
How to Install Exim Mail Server on Ubuntu 14

Exim is a mail transfer agent (MTA) originally developed at the University of Cambridge to replace the university’s preexisting mail system. While Exim drew on the design of older MTAs, it has since grown into a unique, fully featured system, and it has become popular thanks to its flexibility and its compatibility with Unix-like operating systems.

Exim’s straightforward setup lets users complete the installation easily, even without in-depth knowledge of MTA systems. Additionally, Exim can handle many connections concurrently, and it is frequently updated with bug fixes and improvements.

Getting Started

Before we start installing your Exim mail server, we need to confirm you have a node available. This node can be a Cloud Server or a Dedicated Server, and it needs to have Ubuntu 14.04 LTS installed.

You also need to confirm you’ve set up Secure Shell (SSH) root access for your server, which you’ll do after setting it up to run Ubuntu 14.04 LTS. If you’re not familiar with it, SSH is a network protocol used to run commands and services securely over an untrusted network: all traffic between your workstation and the server is encrypted, so you can safely administer the server even when the connection between you and it is not trusted.
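For example, if your workstation runs Linux or macOS, logging in takes a single command; replace YOUR_SERVER_IP, a placeholder used here, with your server’s actual IP address:
ssh root@YOUR_SERVER_IP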

Tutorial

Installing the Mail Server

Now that you have confirmed your node is available, Ubuntu 14.04 LTS is installed, and you have SSH root access on your node, it’s time to install Exim.

The first thing to do during this installation is to refresh your server’s package lists and make sure all installed packages are up to date:
apt-get update && apt-get upgrade -y

After you’ve verified that your server is up to date, you can proceed with the installation of the Exim mail server:

apt-get install exim4 -y

Once the installation finishes, you can begin configuring your new Exim mail server. Run the command below to start the configuration:
dpkg-reconfigure exim4-config

During the configuration process, you will be asked a series of questions to guide you through the process. The answers listed below should be used to complete your configuration:
1- General type of mail configuration: Internet site; mail is sent and received directly using SMTP
2- System mail name: (enter your server’s DNS hostname)
3- IP-addresses to listen on for incoming SMTP connections: 127.0.0.1 ; ::1 (validate with ENTER)
4- Other destinations for which mail is accepted: [EMPTY]
5- Domains to relay mail for: [EMPTY]
6- Machines to relay mail for: [EMPTY]
7- Keep number of DNS-queries minimal (Dial-on-Demand)?:
8- Delivery method for local mail: mbox format in /var/mail/
9- Split configuration into small files?:
10- Root and postmaster mail recipient: [EMPTY]

Now that you’ve completed the configuration of your server, we can check that your new Exim mail server is running correctly:
service exim4 status
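For an extra check, Exim’s own test options can confirm that the daemon and its routing behave as expected: exim4 -bV prints the version and the configuration file in use, and exim4 -bt tests how a given address would be routed. The address below is just an example of a local recipient:
exim4 -bV
exim4 -bt postmaster@localhost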

Congratulations! You’ve completed all the steps necessary to install an Exim mail server on your dedicated node running Ubuntu 14.04 LTS!

Conclusion

You’ve completed the steps necessary to install Exim on your server running Ubuntu 14.04 LTS. Now that the process is finished, your Exim mail server is ready to move into production. If you found this guide useful and it helped you complete the setup of your new mail server, please share it with others who are searching for guidance on setting up an Exim server.


How To Set Up Apache Virtual Hosts on Ubuntu 14

One of the great strengths of Apache is the ability to create multiple hosts on the same server. As long as the hardware can support it, you can run multiple websites on one server through virtual hosts. This article will describe the setup process for Ubuntu 14.04 LTS.

Getting Started
To complete this walkthrough successfully the following are required:
• A node (Dedicated or Cloud Server) running Ubuntu 14.04 LTS
• All commands must be entered as root
• A full LAMP stack implementation

Tutorial
Let’s start by copying the default virtual host configuration on your server. The file is named 000-default.conf and is located in the /etc/apache2/sites-available directory.
cd /etc/apache2/sites-available/
cp 000-default.conf globo.tech.conf

Once the globo.tech.conf file has been created, open it with any editor you feel comfortable using. We are going to make sure that we have valid entries for the server port, admin email, server name, and server alias. We also need to make sure that the DocumentRoot points to the location where we will store the globo.tech HTML files.
nano /etc/apache2/sites-available/globo.tech.conf

<VirtualHost *:80>
    # The ServerName directive sets the request scheme, hostname and port that
    # the server uses to identify itself. This is used when creating
    # redirection URLs. In the context of virtual hosts, the ServerName
    # specifies what hostname must appear in the request's Host: header to
    # match this virtual host.
    ServerName globo.tech
    ServerAlias www.globo.tech
    ServerAdmin webmaster@globo.tech
    DocumentRoot /var/www/globo.tech/public_html

    # Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
    # error, crit, alert, emerg.
    # It is also possible to configure the loglevel for particular
    # modules, e.g.
    #LogLevel info ssl:warn
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined

    # For most configuration files from conf-available/, which are
    # enabled or disabled at a global level, it is possible to
    # include a line for only one particular virtual host. For example the
    # following line enables the CGI configuration for this host only
    # after it has been globally disabled with "a2disconf".
    #Include conf-available/serve-cgi-bin.conf
</VirtualHost>

Now that we’ve entered those values, save and close the file. Navigate to the /var/www directory. This is where we will create the globo.tech directory structure to store its web pages. Then we will make sure that Apache has permissions to the directory structure.
mkdir -p /var/www/globo.tech/public_html
chmod 755 /var/www/globo.tech/public_html/
chown www-data /var/www/globo.tech/public_html/

The final step in this process is to enable the new virtual host and restart Apache.

a2ensite globo.tech
service apache2 reload

You can test the virtual host by pointing the globo.tech domain (or a hosts-file entry on your workstation) at your server’s IP address and browsing to http://globo.tech. You can repeat the steps above to create another website on the same machine, as long as you use a different domain name.
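If the domain does not resolve to the server yet, a quick way to test from the server itself is to drop a placeholder page into the document root and request it with curl, sending the matching Host header (this assumes curl is installed; the page content is only an example):
echo "<h1>globo.tech is working</h1>" > /var/www/globo.tech/public_html/index.html
curl -H "Host: globo.tech" http://localhost/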

Conclusion
Virtual hosting on Apache is a very powerful tool for creating multiple websites on the same server. Using the instructions we’ve provided, you can create and test multiple websites before deploying them on the internet. The number of websites one machine can hold is only limited by the ability of the hardware to support them. If you’ve enjoyed this KB, please consider sharing it with your friends.


How to Set Up a Minecraft Server on Ubuntu 14

Minecraft servers are designed for cooperative play with other players, either online or over a local area network (LAN) connection. A server can run on your hosted server, a local dedicated hardware server, a local gaming computer, or a virtual private server hosted on a personal machine.

Each Minecraft server runs the default server software provided by Mojang, which works on Windows, Mac OS X, and Unix-based systems. Additionally, Mojang offers other hosting options, including LAN servers, external server clients, rented servers, and its Realms service.

Getting Started

In order to follow this guide you will need to have the following in place:
• One node (Cloud Server or Dedicated Server) that has Ubuntu 14.04 LTS installed.
• SSH Root Access to your server

Tutorial

Server Configuration

To begin, you need to verify that your server is currently up to date:
apt-get update && apt-get upgrade -y
After confirming that your server is current, the next step is to check whether Java is installed and which version is present:
java -version

If the latest version of Java is not installed, you may see a message stating “The program ‘java’ can be found in the following packages.” If that is the case, install Java with the following commands (confirming with the Enter/Return key when prompted):
add-apt-repository ppa:openjdk-r/ppa
apt-get update
apt-get install openjdk-8-jdk -y

During setup, you will also need to install the screen package, which lets the server keep running regardless of the status of your SSH connection:
apt-get install screen -y

Installing Minecraft

To begin, create a directory under /home and change into it:
mkdir /home/minecraft ; cd /home/minecraft

Following that, download the server software from Mojang (version 1.12.1 in this example):
wget -O minecraft_server.jar https://s3.amazonaws.com/Minecraft.Download/versions/1.12.1/minecraft_server.1.12.1.jar

Once the download has finished, start a screen session so the server keeps running after you disconnect:
screen -S "Minecraft"

At this point, you have almost completed setting up your server for Minecraft, but you will need to accept and verify that the End User License Agreement (EULA) has been accepted as true. We encourage you to read through the EULA entirely before accepting it.

After you’ve read through the EULA, create a text file called eula.txt that marks it as accepted:
touch eula.txt
echo "eula=TRUE" > eula.txt

Now that you have finished reading the EULA and accepted it, you can start your new server:
java -Xmx1024M -Xms1024M -jar minecraft_server.jar nogui

As your server starts, you will observe the following in your console window:
root@globotech-minecraftserver-ubuntu14:/home/minecraft# java -Xmx1024M -Xms1024M -jar minecraft_server.jar nogui
[15:12:05] [Server thread/INFO]: Starting minecraft server version 1.12.1
[15:12:05] [Server thread/INFO]: Loading properties
[15:12:05] [Server thread/WARN]: server.properties does not exist
[15:12:05] [Server thread/INFO]: Generating new properties file
[15:12:05] [Server thread/INFO]: Default game type: SURVIVAL
[15:12:05] [Server thread/INFO]: Generating keypair
[15:12:06] [Server thread/INFO]: Starting Minecraft server on *:25565
[15:12:06] [Server thread/INFO]: Using epoll channel type
[15:12:06] [Server thread/INFO]: Preparing level "world"
[15:12:06] [Server thread/INFO]: Loaded 488 advancements
[15:12:07] [Server thread/INFO]: Preparing start region for level 0
[15:12:08] [Server thread/INFO]: Preparing spawn area: 7%
[15:12:09] [Server thread/INFO]: Preparing spawn area: 14%
[15:12:10] [Server thread/INFO]: Preparing spawn area: 23%
[15:12:11] [Server thread/INFO]: Preparing spawn area: 31%
[15:12:12] [Server thread/INFO]: Preparing spawn area: 37%
[15:12:13] [Server thread/INFO]: Preparing spawn area: 46%
[15:12:14] [Server thread/INFO]: Preparing spawn area: 54%
[15:12:15] [Server thread/INFO]: Preparing spawn area: 63%
[15:12:16] [Server thread/INFO]: Preparing spawn area: 73%
[15:12:17] [Server thread/INFO]: Preparing spawn area: 84%
[15:12:18] [Server thread/INFO]: Preparing spawn area: 94%
[15:12:19] [Server thread/INFO]: Done (12.866s)! For help, type "help" or "?"

Congratulations! You’ve finished setting up your new Minecraft gaming server on Ubuntu 14.04. You can detach from the screen session by pressing Ctrl+A followed by D. To reattach it later, run:
screen -R

If necessary, you can edit your server’s configuration through the following path:
nano /home/minecraft/server.properties
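The values below are only an illustration of commonly adjusted settings; tweak them to taste and restart the server afterwards:
server-port=25565
motd=A Minecraft Server on Ubuntu 14.04
max-players=20
white-list=false
online-mode=true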

Conclusion

Your Minecraft server setup is complete, and you’re ready to begin using the server for LAN or online gameplay in cooperative mode. If you found this setup guide useful, please share it with others looking to set up their own game server.

How to Achieve High Availability Load Balancing with Keepalived on Ubuntu 14

Load balancing is a way to distribute workloads across multiple computing resources so that large, resource-intensive tasks can be completed safely and reliably, with more efficiency and speed than if a single machine performed them.

Keepalived is a free and open source load balancing and high-availability solution for Linux systems. This guide will go over the installation process for Keepalived on Ubuntu 14.04 and how you can use it for load balancing on your Linux clusters.

Getting Started

Before you begin to follow the steps in this guide, make sure that you meet these requirements:
• Two servers (Cloud Server or Dedicated Server), each running a fresh installation of Ubuntu 14.04. We will call these servers LB1 and LB2 below
• Both servers connected to the same LAN
• Root access to both servers

For the purposes of this guide, we’ll be working with a public network of 173.209.49.66/29 and a private LAN of 10.119.0.0/24 (part of the 10.0.0.0/8 private range).

Tutorial

For your reference, here are the servers, or load balancers, we’ll be working with, along with their respective public and private IP addresses. Where necessary, remember to replace them with the IP addresses of your own servers.

LB1
Public:173.209.49.66
Private:10.119.0.1

LB2
Public:173.209.49.67
Private:10.119.0.2

The load balancers will share a “floating IP”, and we’ll configure active/passive redundancy as well.

Floating
Public:173.209.49.70
Private:10.119.0.10

The first task is to ensure that the systems of both servers are fully up to date.

apt-get update
apt-get -y upgrade

If it’s installed, disable Ubuntu’s default firewall.

ufw disable

The next step is to install Keepalived and all necessary dependencies.

apt-get install linux-headers-$(uname -r) keepalived

Use this command to activate Keepalived on boot. We’ll also load the ip_vs kernel module, which provides the IPVS load balancing functionality used by Keepalived.

update-rc.d keepalived defaults
modprobe ip_vs
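To have ip_vs loaded automatically after a reboot as well, you can optionally append it to /etc/modules:
echo "ip_vs" >> /etc/modules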

Now we’ll have to configure Keepalived for our setup.

echo "" > /etc/keepalived/keepalived.conf
nano /etc/keepalived/keepalived.conf

Here is the configuration for the LB1 server.

vrrp_instance VI_LOCAL {
    interface eth1
    state MASTER
    virtual_router_id 51
    priority 101
    virtual_ipaddress {
        10.119.0.10
    }
    track_interface {
        eth0
        eth1
    }
}

vrrp_instance VI_PUB {
    interface eth0
    state MASTER
    virtual_router_id 52
    priority 101
    virtual_ipaddress {
        173.209.49.70
    }
    track_interface {
        eth0
        eth1
    }
}

virtual_server 173.209.49.70 443 {
    delay_loop 4
    lb_algo sh # source hash
    lb_kind NAT
    protocol TCP
    real_server 10.119.0.100 443 {
        weight 1
        TCP_CHECK {
            connect_timeout 15
            nb_get_retry 3
            delay_before_retry 2
        }
    }
    real_server 10.119.0.101 443 {
        weight 1
        TCP_CHECK {
            connect_timeout 15
            nb_get_retry 3
            delay_before_retry 2
        }
    }
}

virtual_server 173.209.49.70 80 {
    delay_loop 4
    lb_algo wrr # weighted round robin
    lb_kind NAT
    protocol TCP
    real_server 10.119.0.100 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 15
            nb_get_retry 3
            delay_before_retry 2
        }
    }
    real_server 10.119.0.101 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 15
            nb_get_retry 3
            delay_before_retry 2
        }
    }
}

And these alterations will be applied to the LB2 server.

vrrp_instance VI_LOCAL {
    interface eth1
    state BACKUP
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        10.119.0.10
    }
    track_interface {
        eth0
        eth1
    }
}

vrrp_instance VI_PUB {
    interface eth0
    state BACKUP
    virtual_router_id 52
    priority 100
    virtual_ipaddress {
        173.209.49.70
    }
    track_interface {
        eth0
        eth1
    }
}

virtual_server 173.209.49.70 443 {
    delay_loop 4
    lb_algo sh # source hash
    lb_kind NAT
    protocol TCP
    real_server 10.119.0.100 443 {
        weight 1
        TCP_CHECK {
            connect_timeout 15
            nb_get_retry 3
            delay_before_retry 2
        }
    }
    real_server 10.119.0.101 443 {
        weight 1
        TCP_CHECK {
            connect_timeout 15
            nb_get_retry 3
            delay_before_retry 2
        }
    }
}

virtual_server 173.209.49.70 80 {
    delay_loop 4
    lb_algo wrr # weighted round robin
    lb_kind NAT
    protocol TCP
    real_server 10.119.0.100 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 15
            nb_get_retry 3
            delay_before_retry 2
        }
    }
    real_server 10.119.0.101 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 15
            nb_get_retry 3
            delay_before_retry 2
        }
    }
}

The “virtual_router_id” setting must be unique for each of the defined VRRP instances, and it should also be unique within your VLAN. Make sure you’re not using the same ID on any two clusters connected via the same physical switch or VLAN. Naturally, this ID needs to match on both LB1 and LB2 for the same VRRP instance. Valid values are from 0 to 255.

Now, load the nf_conntrack module, and then we can proceed to the sysctl configuration.

modprobe nf_conntrack
nano /etc/sysctl.conf

Alter the configuration so it matches the below:

net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1
net.nf_conntrack_max = 1000000

Now, apply the changes.
sysctl -p

Finally, we can start up Keepalived.

service keepalived start

Let’s verify that Keepalived is running as expected. First, check that both floating IPs are assigned to the first Keepalived instance.

Using the command ip addr show, you can see if the IPs are present:

root@lb1:/etc# ip addr show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:8e:e4:2f brd ff:ff:ff:ff:ff:ff
    inet 173.209.49.66/29 brd 173.209.49.71 scope global eth0
       valid_lft forever preferred_lft forever
    inet 173.209.49.70/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe8e:e42f/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:8e:ea:2d brd ff:ff:ff:ff:ff:ff
    inet 10.119.0.1/24 brd 10.119.0.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet 10.119.0.10/32 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe8e:ea2d/64 scope link
       valid_lft forever preferred_lft forever

If everything’s been set up correctly, you’ll see 173.209.49.70 and 10.119.0.10 on LB1. If you shut down keepalived on LB1, those same IP addresses will appear on the second server.

root@lb1:/etc# service keepalived stop

After stopping Keepalived on LB1, log in to the second server and check that it has indeed picked up those IP addresses:

root@lb2:~# ip addr show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:8e:ae:b8 brd ff:ff:ff:ff:ff:ff
    inet 173.209.49.67/29 brd 173.209.49.71 scope global eth0
       valid_lft forever preferred_lft forever
    inet 173.209.49.70/32 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe8e:aeb8/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:50:56:8e:ed:ba brd ff:ff:ff:ff:ff:ff
    inet 10.119.0.2/24 brd 10.119.0.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet 10.119.0.10/32 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::250:56ff:fe8e:edba/64 scope link
       valid_lft forever preferred_lft forever

Finally, make sure that the backends defined in Keepalived’s configuration show up in the IPVS table:

root@lb1:/etc# ipvsadm

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 173.209.49.70:http wrr
-> 10.119.0.100:http Masq 1 0 0
-> 10.119.0.101:http Masq 1 0 0
TCP 173.209.49.70:https sh
-> 10.119.0.100:https Masq 1 0 0
-> 10.119.0.101:https Masq 1 0 0
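Assuming the backend servers 10.119.0.100 and 10.119.0.101 are already serving HTTP, you can also test the floating IP end to end from any machine that can reach it; a response here indicates traffic is being forwarded to a backend:
curl -I http://173.209.49.70/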

Conclusion

Now that you’ve completed this guide, you should be able to set up Keepalived on your own clusters. Keepalived is scalable, so you can try it out on clusters of any size. If you found this article helpful, feel free to share it with your friends and let us know in the comments below!

How to Install a LAMP Stack on Ubuntu 14

The LAMP software stack refers to a setup on a Linux node that includes the Apache HTTP web server, the MySQL relational database management system (RDBMS), and the PHP programming language. These components come together to allow dynamic web applications and content to be served from your Ubuntu 14 server. While the individual components are largely interchangeable with other tools that serve the same functions, the original LAMP stack remains highly popular due to its robustness.

This guide will show you exactly how you can install and configure LAMP on your Ubuntu 14 server.

Getting Started

To complete this guide, you will need the following:
• 1 Node (Cloud Server or Dedicated Server) with Ubuntu 14.04 LTS installed.
• All commands should be run as the root user

If you don't want to go through the whole process of setting up a basic LAMP for your Ubuntu 14.04 server, you can always try our One-Click Apps and get a fresh LAMP in seconds.

Tutorial

This guide will cover how to install each aspect of the LAMP stack on your Linux system. However, before we can proceed it is necessary to update the local package repository information on your server using the command apt-get update as root. This command will fetch the latest information about what packages and package versions are available for installation on your system:

apt-get update

Once this command completes, you are ready to continue.

Installing Apache

The Apache HTTP web server can be easily installed using the default package manager apt-get. With this single command, which uses the -y flag to simplify the installation by removing the “yes” prompt, Apache will be installed on your Ubuntu 14 server using the apache2 package:

apt-get -y install apache2

After the installation completes, you can verify that the Apache web server is working as it should. Open up the following page in your web browser, replacing the text YOUR_SERVER_IP with the IP address of your server:

http://YOUR_SERVER_IP

You will see the default Ubuntu 14 Apache webpage displayed with some basic configuration information if everything installed as it should.
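You can also check from the command line on the server itself, assuming curl is installed (apt-get install curl); the Server header in the response should identify Apache, though the exact version string may differ:
curl -I http://localhost/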

Installing MySQL

To install the MySQL database management system, we will again use the apt-get utility, this time for the mysql-client and mysql-server packages:

apt-get -y install mysql-client mysql-server

You will be prompted to set a root password for MySQL during the installation. This password applies to the root@localhost account. Choose a secure, strong password that you will remember, as we will need it later in this tutorial.

To create the MySQL database directory structure where MySQL will store information, type the following:

mysql_install_db

If your Ubuntu 14 server is for production use, or you wish to enhance your server security for any other reason, you can apply several additional security tweaks to your MySQL installation. Enter the following command, follow the interactive script, and, when prompted, provide the MySQL root user password that you just set in the step above.

mysql_secure_installation

One example of the security modifications made by the secure MySQL installation is the removal of several sample users and databases. Note that it also disables remote root logins. The changes take effect as soon as the process completes, so MySQL runs with the tightened settings on the spot.

Having completed these steps, the MySQL database management system is installed on your Ubuntu 14 server.
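To confirm that the service is running and your root password works, you can list the default databases; you will be prompted for the root password you set during installation:
mysql -u root -p -e "SHOW DATABASES;"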

Installing PHP

The final component of the LAMP platform stack is the PHP programming language, which is what makes dynamic content possible on your web server. PHP is highly customizable, and modules exist for numerous PHP extensions. To start, you can use this command to install PHP with its default core modules along with MySQL support:

apt-get -y install php5 php5-mysql

If you would like to install further modules for PHP, you can add them by modifying the above command to fit the following format, listing the additional packages separated by spaces:

apt-get install package_1 package_2 ... package_n

For example, if you also want to download and install the helper packages libapache2-mod-php5 and php5-mcrypt, your call to apt-get should look as follows:

apt-get install libapache2-mod-php5 php5-mcrypt

The apt package manager also possesses a command that will list all the available PHP modules and libraries for you, so that you can better decide what you need and what is available. Execute the following command, which uses apt-cache to search for all packages that begin with the text php5- in the Linux repository:

apt-cache search php5-

The output of the search command will be a list that will look something like the following, listing the module first, followed by a brief description:

php5-cgi - server-side, HTML-embedded scripting language (CGI binary)
php5-cli - command-line interpreter for the php5 scripting language
php5-common - Common files for packages built from the php5 source
php5-curl - CURL module for php5
php5-dbg - Debug symbols for PHP5
...

For further information about each module, you have the option of either searching online or using another handy command from the package manager apt:

apt-cache show package_name

The show command will display a long description for your desired package alongside with other metadata.
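To quickly confirm that the PHP interpreter itself works, you can check its version and run a one-line script from the command line; this requires the php5-cli package (install it with apt-get install php5-cli if the php command is not found):
php -v
php -r 'echo "PHP is working\n";'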

Configuring Apache

Now that PHP is installed on the Ubuntu 14 server, you will need to make some changes to the Apache configuration in order for your web server to be able to handle PHP. When Apache serves the files within the webroot directory located by default in /var/www/html/, it will look first for a file called index.html. However, we do not want this behavior when we are using PHP. Instead, we want the Apache server to prefer PHP files and look first for a file called index.php.

To accomplish this modification to the files that the Apache web server prioritizes, open up the following Apache configuration file in the nano text editor:

nano /etc/apache2/mods-enabled/dir.conf

This file should look like the following:

<IfModule mod_dir.c>
    DirectoryIndex index.html index.cgi index.pl index.php index.xhtml index.htm
</IfModule>

Note the positions of index.php and index.html in the Apache configuration file. The order in which the files appear dictates Apache’s preference: files listed further to the left are tried first. We want PHP files to be served first, so move index.php to the left until it appears immediately after DirectoryIndex and before index.html. The modified line should look like this:

<IfModule mod_dir.c>
    DirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm
</IfModule>

After making this change, save the file and close the nano editor. In order for the changes we just made to be recognized, we will need to restart the Apache web server using this command:

service apache2 restart

Testing the Setup

With all the necessary LAMP components installed and configured, you can verify your stack is set up and running as it should be by creating a basic PHP file that you can then view in your browser.

Go to the webroot folder at /var/www/html/. Remember, it is from this directory that the files will be served by Apache, so you will need to create the test file in there. Change directories:

cd /var/www/html/

Next, open up the nano text editor to the new file called index.php with the following command:

nano index.php

Type the following into the open index.php file. This short PHP script will display the PHP info when viewed from its webpage:

<?php
phpinfo();
?>

Save and close the file. Next, open up the relevant webpage in your browser, replacing YOUR_SERVER_IP with the IP address of the Ubuntu 14 server you are using:

http://YOUR_SERVER_IP/index.php

If the page correctly opens and displays information concerning the PHP version of your server, then congratulations, your setup is working!

Conclusion

With the basic LAMP stack installed, configured, and ready to use on your Ubuntu 14 server, you are now able to easily host websites and web applications with dynamic content. You can now go ahead and explore your options for what your web server can do, through applications such as Moodle or WordPress. Don’t forget, feel free to share this tutorial with others who may be interested if you found it useful!

How to Install the R1Soft Backup Agent on Ubuntu 14

Backups are a critical facet of any IT infrastructure for many reasons. Unfortunately, there are many ways to take a good backup, and it is neither safe nor efficient to simply copy your server’s content to a new location. Further, some services cannot be backed up simply by copying their data, and must instead use custom logic to ensure that their persistent storage is securely snapshotted. To address this complex use case, a variety of server backup tools have been created.

R1Soft is a premier solution targeted at multi-tenant hosting environments. Hosting environments are among the most challenging to back up, as it is impossible to predict the individual backup needs of every service being run by every user. Further, backups must be coordinated in an off-site location, and centralized manipulation and monitoring are essential. Read on to learn how to set up an R1Soft agent on an Ubuntu 14.04 LTS server.

Getting Started

To complete this guide, you will need the following:
• 1 Node (Cloud Server or Dedicated Server) with Ubuntu 14.04 installed.
• All commands should be run as the root user

Tutorial

The R1Soft packages are shipped in their own Ubuntu package repository, making installation and upgrades a snap. Here we add this repository so the packages can be installed via apt-get.

echo "deb http://repo.r1soft.com/apt stable main" > /etc/apt/sources.list.d/r1soft.list
wget http://repo.r1soft.com/r1soft.asc
apt-key add r1soft.asc

With the new repository in place, Ubuntu’s package cache must be updated so the new packages can be found.

apt-get update

Next we install the package for the R1Soft backup agent.

apt-get install r1soft-cdp-enterprise-agent

You’ll now need the R1Soft key from your backup manager. In this example, we use the IP address 192.168.10.10. Substitute the appropriate IP from your infrastructure in the following command:

r1soft-setup --get-key http://192.168.10.10

Here we’ll install the R1Soft driver for your distribution. In this case, you’ll use the hcp driver.

r1soft-setup --get-module

With all the necessary pieces in place, we’ll need to restart the R1Soft agent so your changes are detected.

/etc/init.d/cdp-agent restart
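To confirm the agent restarted cleanly and is listening for the backup manager, you can check the listening sockets; the CDP agent typically listens on TCP port 1167:
netstat -plnt | grep 1167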

Conclusion

Your Ubuntu server is now running the R1Soft agent, and you can integrate it into your larger R1Soft infrastructure for reliable, centralized backups. Backups are important, and anyone not performing them regularly will eventually regret it. If you found this tutorial useful, feel free to share it with anyone who needs a good backup solution for their servers.

How to Install Moodle on Ubuntu 14

Moodle, which stands for “modular object-oriented dynamic learning environment,” is a type of software known as a learning management system. It is free, open source, and written in PHP. Moodle is most commonly used in e-learning projects for schools, universities, and even workplaces, and it allows a mixture of interactive and informative content to be stored primarily for learning purposes and accessed from the internet. Thanks to its immense variety of customizable management features and community-sourced plugins, Moodle is a popular way to create private learning websites tailored to suit any environment.

To install Moodle on your Ubuntu 14 server, follow this guide.

Getting Started

You will need the following in order to complete this tutorial:
• 1 Node (Cloud Server or Dedicated Server) running a clean installation of Ubuntu 14.04.
• All commands must be entered as root

A web application like Moodle also requires that your Ubuntu 14 server contains a particular set of tools, known as LAMP. This term is an acronym that refers to Linux, Apache, MySQL, and PHP, all installed on a single server and used together. This particular stack is notable for enabling the server to host dynamic websites and web applications by running the Apache web server on the system, storing website data in the MySQL database, and using PHP for dynamic content processing. Do not worry if your server does not have these tools yet; we will cover them in the first part of this Moodle installation guide.

Tutorial

It is always good practice to update your local package repository before downloading any new software. This allows for you to have the most recent information about which packages and package versions are available. As root, run the apt-get utility with the option update. This utility is the built-in package installation and management tool for Ubuntu and can also be used to fetch new package information, as we will do in this step:

apt-get update

If you already have the packages specified in the LAMP acronym running on your Linux system, you can proceed to the following section concerning further prerequisites. However, if you do not have Apache, MySQL, and/or PHP, then you must install these prerequisite packages.

Installing Other Moodle Prerequisites

Beyond LAMP, your Ubuntu server will also need further packages to be able to run Moodle. These packages include more advanced utilities for PHP and running PHP with the MySQL database. A notable and very important package out of these remaining prerequisites is git-core. This package will install the distributed version control system Git on your Ubuntu server. Git is vital as it will be used to install and/or update the Moodle Core Application on your server. Install the PHP packages and Git with the apt-get utility:

apt-get -y install graphviz aspell php5-pspell php5-curl php5-gd php5-intl php5-mysql php5-xmlrpc php5-ldap git-core

Downloading Moodle and Configuring the Server

With the prerequisites set up for Moodle, we can now proceed to actually downloading Moodle and making some configurations for it to work. For the sake of this guide, we will use the /opt directory as our installation location; you are free to choose your own and simply replace all references to this path with your own. First, change to /opt:

cd /opt

Next, fetch the Moodle repository from Git. This step is why it is highly important that Git is installed on your system. If you are behind a corporate firewall, you may need to play around with your proxy settings if you encounter any issues fetching from Git. To get Moodle using Git, we will perform the action clone, which will make a copy of the main Moodle code from the Git repository hosted on their website. This local repository is linked to the main repository using Git upon creation:

git clone git://git.moodle.org/moodle.git

The previous step only made a copy of the Moodle code; it did not install it yet. Change into the newly created moodle directory that appears in /opt after the cloning process:

cd moodle

The different versions of Moodle are stored in branches within this Git repository. This means that multiple versions of the Moodle code exist, with each version being known as a Git “branch.” To retrieve a list of all branches that are available on the remote code source, use:

git branch -a

After fetching the branch list, set up your local repository to track the specific branch you want on the remote repository, in this case MOODLE_30_STABLE. This links the two branches and tells Git which one to track; the branch we track is the version of Moodle we will install. To select a different version, for example 2.7, look up online which versions are available and their corresponding branch names, and then replace MOODLE_30_STABLE with the one you want.

git branch --track MOODLE_30_STABLE origin/MOODLE_30_STABLE

With the branch we want being tracked, we can now switch to it in our local Moodle repository to be able to start using that version of Moodle code. Again, replace MOODLE_30_STABLE with the version that you want to use instead if it differs.

git checkout MOODLE_30_STABLE

Now that we have the code for our desired Moodle version, it must be copied into the document root directory, also known as the webroot, found in /var/www/html/. This directory is where the Apache web server serves your website from. Use the cp command with the -R flag to copy the Moodle directory recursively. We intentionally set up the local Moodle Git repository outside the Apache webroot so that it is easier to plan and stage future Moodle upgrades and to customize its plugins.

cp -R /opt/moodle /var/www/html/

Setting up Moodle also requires a second directory that will hold Moodle’s actual data. Create it under /var using the mkdir command:

mkdir /var/moodledata

These two directories we created will need some changes to their permissions and ownership before they can be used for Moodle. Make these changes using the chown and chmod commands to respectively change the ownership and permissions for these new directories with the following commands:

chown -R www-data /var/moodledata
chmod -R 777 /var/moodledata
chmod -R 0755 /var/www/html/moodle

Configuring MySQL

Your MySQL configuration must be tweaked in order to use Moodle. You can always find the latest information online about which tweaks are required for each Moodle version, as they can differ. For the version we are installing, we need to change the default storage engine to innodb, set the file format to Barracuda, and enable innodb_file_per_table so that the Barracuda file format works properly. Note that while this version should automatically select innodb as the default during installation, this may not be true for the Moodle version you chose, so it is good practice to set it explicitly anyway. Open the configuration file with the vi editor to make the required changes:

vi /etc/mysql/my.cnf

With the file open, locate the [mysqld] section and find “Basic Settings.” Under the last statement, add the following lines before saving and closing the file:

default_storage_engine = innodb
innodb_file_per_table = 1
innodb_file_format = Barracuda

For the changes we just made to take effect, we must now restart MySQL using this command:

service mysql restart
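You can verify that MySQL picked up the new settings by querying the relevant variables; you will be prompted for the MySQL root password:
mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_file%'; SHOW VARIABLES LIKE 'default_storage_engine';"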

Following this, you will need to actually create the database for Moodle and the Moodle MySQL user with the correct permissions. For this step, you absolutely need the root password you set during the MySQL installation above; you will be prompted for it when opening the MySQL prompt:

mysql -u root -p

With the terminal having changed to MySQL (the line should now read mysql>), you can now configure and communicate with the database using SQL, short for Structured Query Language. SQL is a language written specifically for interacting with databases. Run the following line to create the database for Moodle. Replace the text moodle with a different database name if you wish, otherwise the database will be named moodle:

CREATE DATABASE moodle DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci;

In the line below, replace USERNAME and PASSWORD with the username and password of the Moodle user and keep the single quotation marks. This will create that user:

create user 'USERNAME'@'localhost' IDENTIFIED BY 'PASSWORD';

With the Moodle user created, you will need to give the user permissions to actually interact with Moodle. Again, replace USERNAME and PASSWORD with your own information.

GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,CREATE TEMPORARY TABLES,DROP,INDEX,ALTER ON moodle.* TO 'USERNAME'@'localhost' IDENTIFIED BY 'PASSWORD';

You may encounter an error involving a password hash during user creation; this can happen if you use MySQL 5.6+. To solve it, you must use the hash value of the password instead. You can obtain this value by entering the following in the SQL prompt, replacing PASSWORD with your password:

SELECT password('PASSWORD');

The output of the above command should look something like *AD51BAFB2GD003D3480BCED0DH81AB0BG1712535. This is the hash of your password, and you must use it in the commands above, after IDENTIFIED BY, in place of your plain-text password.

When you are finished, use this command to exit the SQL prompt:

quit;
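Before moving on, it is worth confirming that the new user can actually reach the database. Replace USERNAME with the user you created and enter that user’s password when prompted; the moodle database should appear in the list:
mysql -u USERNAME -p -e "SHOW DATABASES;"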

Installing Moodle

With the server set up and ready for Moodle, we finally get to the actual installation. The Moodle directory under the webroot (/var/www/html/moodle) needs to be made temporarily writable so that the config.php file can be created correctly during the online setup. Change the permissions using the chmod command:

chmod -R 777 /var/www/html/moodle

Open your browser to the following page, replacing IP.ADDRESS.OF.YOUR.SERVER with the IP address of your Ubuntu 14 node:

http://IP.ADDRESS.OF.YOUR.SERVER/moodle

The prompts will guide you during the Moodle installation. These prompts will include the following:
1. Changing the path for moodledata to /var/moodledata
2. Selecting mysqli as the database type
3. Setting the host server to localhost under database settings
4. Entering the name of the Moodle database (“moodle” in our guide)
5. Entering the username and password of the Moodle user that you created
6. Setting the tables prefix to mdl_
7. Creating a site administrator account and password

The online installation will also helpfully provide an environment check to notify you if any of the required components to run Moodle are not installed, or not correctly installed, on your system. Go through all the prompts until the installation is complete.

It is vital to now change the permissions in the Moodle directory back again so it is not writable after completing the installation:

chmod -R 0755 /var/www/html/moodle

At this step, we are almost done with the installation and configuration of Moodle! The last step is to set the system paths, as it will provide better Moodle performance. To do this, open up the Moodle webpage again, and navigate to:

Site Administration > Server > System Paths

On this page, input the following information exactly as it is written here:
Path to du: /usr/bin/du
Path to aspell: /usr/bin/aspell
Path to dot: /usr/bin/dot

Finally, click on Save Changes to finish setting the paths.

Conclusion

Congratulations! After completing all of the above steps to install Moodle’s prerequisites, configure your system, and do the actual installation of Moodle, you should now have a working setup that you can access at http://IP.ADDRESS.OF.YOUR.SERVER/moodle! If this tutorial was of use to you, feel free to share it.

How To Set Up an NFS Server on Ubuntu 14

NFS, which stands for Network File System, is a distributed filesystem protocol that allows for the mounting of remote directories from an NFS server onto a client server. These directories can then be accessed from the client much like local storage, and the mounting process can even be automated for greater convenience. Using NFS mounts is a useful ability in situations where space usage needs to be optimized and multiple clients need to access the same server space, with the main advantage of NFS being its allowance for central management of files. This decreases administrator workload as well as enhancing the sharing possibilities of individual files and repositories.

In this tutorial, we will show you how to configure NFS mounting on an Ubuntu 14.04 server and demonstrate its usage in a typical host-client scenario.

Getting Started

Confirm that you have the following before you follow this guide:
• 2 Nodes (Cloud Server or Dedicated Server) running Ubuntu 14.04.
• Root access to both nodes.

Tutorial

The following two servers, along with their respective IP addresses, are used in this tutorial as our client and NFS server. You should replace these addresses with your own server values.
Server 1 : client – 10.10.0.134
Server 2 : nfs-server – 10.10.0.135

Initial Setup

In order for our two servers to be able to communicate in this tutorial, we need to add them to each other’s hosts files. On both servers, open the /etc/hosts file:
sudo nano /etc/hosts

Add the following entries to this file:
10.10.0.134 client
10.10.0.135 nfs-server

Make sure that this is done on both of the Ubuntu servers before proceeding. This allows us to use the machine names instead of their IP addresses in the rest of the tutorial. Remember to replace the IP addresses we used with the IPs of your own machines.

Setup the NFS Server

We begin by setting up the server to be used as our NFS host, so all steps in this category must be executed on that server. Before we install any new packages required for our NFS host setup, we first need to refresh our local package information using the following command:
apt-get update

Now we are ready to install the nfs-kernel-server package.
apt-get install nfs-kernel-server

The package installed in the above step allows directories to be shared over NFS. We will create one such directory on the NFS host at /sharednfs and share it later:
mkdir /sharednfs

The directory we just created is owned by root. If you instead want to create a directory for sharing that is not owned by the root user, change its ownership using sudo chown nobody:nogroup /sharednfs. This is unnecessary in our case, so we will proceed with root ownership.

With our directory for sharing in place and the NFS package installed, continue on the NFS server and configure how the directory will be shared. Open the exports file:
sudo nano /etc/exports

The exports file contains the configuration of our NFS exports and will look something like the following, with each line giving the rules for one shared directory:
/first/directory client(option1,option2,...)
/second/directory client(option1,option2,...)

This means that we want to modify the /etc/exports file to contain the information for our created directory and our client server. Don’t forget to replace the client name from this tutorial with the name or address of your own client server as set in your hosts file. Add a line like the following to the file using the nano text editor:
/sharednfs client(rw,sync,no_root_squash,no_subtree_check)

The NFS options we enabled control how the files are accessed. To begin, rw gives the client both read and write access to the shared directory, meaning it can be edited from both the NFS host and the client server. The next option, sync, makes NFS commit changes to disk before replying to the client, ensuring consistency in the files the client sees. no_subtree_check disables subtree checking (where the host verifies that a requested file actually lies within the exported tree), which prevents problems if files are renamed while the client has them open. Finally, because the directory being shared is owned by root, as in our case, you can use the option no_root_squash to bypass a built-in NFS security feature that would otherwise prevent root on the client from acting as root on the exported directories.
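For comparison, a more restrictive export is also possible. For example, a hypothetical line like the following would share the same directory read-only with every host on the 10.10.0.0/24 network:
/sharednfs 10.10.0.0/24(ro,sync,no_subtree_check)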

After making your changes, save and close the exports file. To export the new share and make sure the NFS service is running, execute the following two commands:
exportfs -a
service nfs-kernel-server start

Setup the Client Server

With our NFS host setup, the client server must also be configured to have access to the shared repository. Much like for the host NFS server, we also need to refresh our local package information before installing any packages. Use the following command to update local information:
apt-get update

After updating our package repository information, we need to install the nfs-common package to allow NFS functionality without the server components.

apt-get install nfs-common

Because NFS shares are “mounted” on the client system, we need to create a mount point for the desired shared directory. Mount points are typically located in /mnt on a filesystem, so we will create ours there as well. Create the directory where the NFS server’s share will be mounted:
mkdir -p /mnt/sharednfs

After creating this mount point, we can finally mount the shared directory from the NFS host onto the directory we just created on the client. Don’t forget to replace the host IP address with your host’s IP address.
mount 10.10.0.135:/sharednfs /mnt/sharednfs

To recap, the directory /sharednfs from the NFS host server with the IP address 10.10.0.135 should now also be accessible at /mnt/sharednfs on the client server with the IP address 10.10.0.134. To check that the NFS shared directory is in fact mounted, execute the following command to show available disk space on the machine:
df -h

The output of the previous command should show something like the following lines if the share is properly mounted (the IP address may show as nfs-server instead):
Filesystem Size Used Avail Use% Mounted on
10.10.0.135:/sharednfs 20G 0G 20G 0% /mnt/sharednfs

You can also list only the mounted NFS shares with the following command, which prints each shared directory and its settings line by line:
mount -t nfs

When using the second command to check NFS mounting, the output should be similar to this if correctly executed (the IP address may show as nfs-server instead):
10.10.0.135:/sharednfs on /mnt/sharednfs type nfs (rw,vers=4,addr=10.10.0.135,clientaddr=10.10.0.134)

As an additional step, we can add the NFS mounting to fstab to be done on boot for convenience. Open the fstab file:
nano /etc/fstab

Add the following line to the bottom of the file, then save and close it. You can choose your own options from the NFS man page (man nfs) to customize how the mount is performed at boot when specified in the fstab file.
nfs-server:/sharednfs /mnt/sharednfs nfs auto,noatime,nolock,bg,nfsvers=4,intr,tcp,actimeo=1800 0 0
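You can test the new fstab entry without rebooting by unmounting the share and letting mount re-read it from fstab:
umount /mnt/sharednfs
mount -a
df -h /mnt/sharednfs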

Test Your NFS Setup

To ensure not only that the share is mounted correctly but also that files are actually shared between the host and client servers, create a file to test the process. On your NFS server, go to the shared directory with cd /sharednfs and execute the following to create a typical “Hello world” file:
echo Hello world > hello.txt

Now, go to your client server, navigate to the shared directory, and check if the file is there:
cd /mnt/sharednfs
ls

You should see the hello.txt file in the directory listing. You can even check its contents for fun:
cat hello.txt

Conclusion

After completing this guide, you are now able to utilize the power of NFS for your own setups and benefit from its central management advantages. However, note that while NFS is quick and easy for accessing remote network systems, its traffic is not encrypted, so consider measures such as tunneling over SSH or a VPN to enhance security. If you found this tutorial on using NFS useful, feel free to share it with others who may be interested.

How to install Counter-Strike: GO server on Ubuntu 14

The fourth installment of the popular franchise Counter-Strike, called Counter-Strike: Global Offensive, is one of the few first person shooters available to Linux users. Accordingly, your Linux box is an excellent candidate to host a server for Counter-Strike: GO. With our guide, you can make installation a painless, error free process.

Getting started

Before proceeding with the steps in this guide, make sure you have the following:
• 1 Node (Cloud Server or Dedicated Server) running Ubuntu 14.
• Root access to the node.
• All system updates applied
• Tar, nano, and wget, which should be included in the base Ubuntu installation already
• lib32gcc1

Tutorial

Your first step is to make sure that your system is up to date.
apt-get update && apt-get upgrade -y

We’ve prepared the following commands for you to run on your Ubuntu 14.04 LTS server.
apt-get install nano wget tar lib32gcc1 -y

You will need to create a user for Steam.
adduser --disabled-login --disabled-password steam

Feel free to leave the info fields empty, as they aren’t necessary. At the end of the process, validate the information by pressing Y or Enter.

Next, we’ll proceed with the installation steps: switch to the steam user, download SteamCMD, and launch its command shell:

cd /home/steam/
su - steam
wget https://steamcdn-a.akamaihd.net/client/installer/steamcmd_linux.tar.gz
tar xvf steamcmd_linux.tar.gz
rm steamcmd_linux.tar.gz
./steamcmd.sh

Once you’re in the shell, issue this command:

login anonymous
force_install_dir ./csgo
app_update 740 validate

Now wait for the server software to download and be installed. You should see this message when everything’s done.

Update state (0x61) downloading, progress: 100.00 (15789214502 / 15789238956)
Success! App '740' fully installed.

All that remains is to type quit to exit the shell.

For WAN connections, Counter-Strike: GO will require a server token. (If you’re just running over LAN, this won’t be necessary.) Naturally, you’ll need to have a Steam account and own a copy of Counter-Strike:GO.

For more information on the Game Server Login Token, be sure to consult Valve’s Counter Strike: GO wiki. Here’s an extract of the pertinent information:

Registering Game Server Login Token

CS:GO game servers and GOTV relays not logged in to a persistent game server account with a Game Server Login Token (GSLT) will only allow clients to connect from the same LAN RFC1918 addresses (10.0.0.0-10.255.255.255,172.16.0.0-172.31.255.255,192.168.0.0-192.168.255.255). If your server has a public routable IP address and clients need to connect from outside the server LAN then you need to get registered for GSLT.

To create your GSLTs, visit the GSLT creation utility and follow the instructions here: http://steamcommunity.com/dev/managegameservers

Each GSLT is restricted for use on one dedicated server instance only, and should be passed on command line with +sv_setsteamaccount THISGSLTHERE. You can also use command line setting -net_port_try 1 to avoid instances from reusing the same GSLT by accident.

Every game server operating with your GSLT must comply with game server operation guidelines outlined here: http://blog.counter-strike.net/index.php/server_guidelines/

Follow this link: http://steamcommunity.com/dev/managegameservers and log in with the Steam account that has access to your copy of Counter Strike: GO.

You’ll see a form to fill in. Here’s the information that you’ll need to supply:
App ID: 730 (Be sure to type 730 here, and not 740 as you did in the Steam command line earlier!)
Memo: This is an optional description that you can use for your own purposes.

After validation, you’ll receive your token.

Game Authentication token (GSLT) Last connection Memo
730 *********************************

Let’s move on to server configuration and your startup script. You will need to first create the server configuration file.
nano /home/steam/csgo/csgo/cfg/server.cfg

Add this content to the file.

hostname "SERVER-HOSTNAME"
sv_password "SERVER-PASSWORD"
sv_timeout 60
rcon_password "RCON-PASSWORD"
mp_autoteambalance 1
mp_limitteams 1
writeid
writeip

hostname "SERVER-HOSTNAME" #This is the name of your server
sv_password "SERVER-PASSWORD" #This is your server password. You can leave it blank for no password.
rcon_password "RCON-PASSWORD" #This is your Rcon server password, if you'd like to administrate your server remotely via Rcon.

You’ll need to enter the following commands as your root user, so first exit the steam user’s shell by typing exit. Create a startup script that will automatically start Counter-Strike: GO if the server reboots. The script will be stored in this file:

nano /etc/init.d/csgo

Add the content below to the file, being sure to replace [YOUR_GSLT] in the script with your actual game server token.
#!/bin/sh -e
DAEMON="/home/steam/csgo/srcds_run"
daemon_OPT="-game csgo -console -usercon +game_type 0 +game_mode 1 +mapgroup mg_active +map de_dust2 +sv_setsteamaccount [YOUR_GSLT] -net_port_try 1"
DAEMONUSER="steam"
daemon_NAME="srcds_run"
PATH="/sbin:/bin:/usr/sbin:/usr/bin"
test -x $DAEMON || exit 0
. /lib/lsb/init-functions
d_start () {
log_daemon_msg "Starting system $daemon_NAME Daemon"
start-stop-daemon --background --name $daemon_NAME --start --quiet --chuid $DAEMONUSER --exec $DAEMON -- $daemon_OPT
log_end_msg $?
}
d_stop () {
log_daemon_msg "Stopping system $daemon_NAME Daemon"
start-stop-daemon --stop --retry 5 --quiet --name $daemon_NAME
log_end_msg $?
}
case "$1" in
start|stop)
d_${1}
;;
restart|reload|force-reload)
d_stop
d_start
;;
force-stop)
d_stop
killall -q $daemon_NAME || true
sleep 2
killall -q -9 $daemon_NAME || true
;;
*)
echo "Usage: /etc/init.d/$daemon_NAME {start|stop|force-stop|restart|reload|force-reload|status}"
exit 1
;;
esac
exit 0

If you’re looking for more information about game mode and game settings, check out Valve’s wiki page on starting the server.
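For example, switching the server from competitive to classic casual play only requires changing the +game_type and +game_mode values in the daemon_OPT line of the startup script above. The combinations below are the commonly documented ones; double-check them against Valve’s wiki before relying on them:

# Commonly documented game_type/game_mode combinations (verify on Valve's wiki):
#   Classic Casual:      +game_type 0 +game_mode 0
#   Classic Competitive: +game_type 0 +game_mode 1
#   Arms Race:           +game_type 1 +game_mode 0
#   Demolition:          +game_type 1 +game_mode 1
#   Deathmatch:          +game_type 1 +game_mode 2
# Example daemon_OPT for a classic casual server (otherwise identical to the script above):
daemon_OPT="-game csgo -console -usercon +game_type 0 +game_mode 0 +mapgroup mg_active +map de_dust2 +sv_setsteamaccount [YOUR_GSLT] -net_port_try 1"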

We’ll need to make the script executable.
chmod +x /etc/init.d/csgo

Then, enable your Counter-Strike: GO Server to start on boot.
update-rc.d csgo defaults

Start up the server.
/etc/init.d/csgo start
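If you’d like to confirm that the game server actually came up (a quick sanity check, not something the startup script requires), look for the srcds process running under the steam user:

ps aux | grep [s]rcds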

The final step is to edit your firewall so Counter-Strike: GO traffic can flow to and from your server. If you are managing your firewall directly with iptables, here is how to open your ports both for Steam services and the Counter-Strike: GO server.
iptables -A INPUT -p udp -m udp --sport 27000:27030 --dport 1025:65535 -j ACCEPT
iptables -A INPUT -p udp -m udp --sport 4380 --dport 1025:65535 -j ACCEPT

The ports that you’re allowing will carry this traffic:
• 27000 to 27015 inclusive: UDP game client traffic
• 27015 to 27030 inclusive: UDP, typically matchmaking and HLTV
• 4380: UDP, Steamworks P2P networking and Steam voice chat

And if you’re using the iptables-persistent package to keep your rules across reboots, make sure to add this set of rules to your saved rules file.
-A INPUT -p udp -m udp --sport 27000:27030 --dport 1025:65535 -j ACCEPT
-A INPUT -p udp -m udp --sport 4380 --dport 1025:65535 -j ACCEPT
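As an illustration, assuming iptables-persistent’s defaults (IPv4 rules stored in /etc/iptables/rules.v4), you can either edit that file by hand or dump the rules currently loaded in the kernel into it:

iptables-save > /etc/iptables/rules.v4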

Conclusion

Get ready to play Counter-Strike: GO on your newly launched server. Be sure to invite your friends, and to share this article with them if they’re interested in setting up their own server.

How to install NodeJS and run node applications on Ubuntu 14

NodeJS is a single-threaded, asynchronous runtime for server-side JavaScript-based applications. By being lightweight, easy to install, and simple to scale, Node is gaining popularity as a fast and powerful platform for internet-facing apps, from API backends to microservices and beyond. Ubuntu 14.04 is a stable Linux distribution with years of support remaining. If you’re looking for a great platform for your next high-performance app, then it is hard to go wrong with the combination of Node and Ubuntu 14.04 LTS. In this guide, we’ll present two options for installing Node, and will also get you up and running with a simple app to show just how easy developing with this combination of technologies can be.

Getting Started

You’ll need the following in place before getting started with this guide:
• 1 server node (Cloud Server or Dedicated Server) running Ubuntu 14.04 LTS
• Root access

You’ll be installing everything as root, though we recommend running Node applications under separate user accounts. If your app is compromised and is running with root access, the attacker will have complete access to your server.
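As a minimal sketch of that recommendation (the nodeuser account below is purely an example and isn’t used elsewhere in this guide), you could create an unprivileged user and launch the demo app from later in this tutorial under it:

adduser --disabled-password --gecos "" nodeuser
sudo -u nodeuser node /home/nodeapp/nodetest.js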

Tutorial

Start by ensuring that your Ubuntu installation is up-to-date. Here we’ll update the package cache, and will apply all available bugfixes and security updates. You should perform this step regularly to keep your system updated and running securely.

apt-get update && apt-get upgrade -y

There are a few ways to go about installing Node. The easiest method involves installing the version shipped with Ubuntu’s default package repository. This will leave you with a stable version of Node, though the release available through Ubuntu may lag significantly behind the most recent official Node version.

apt-get install nodejs npm -y

Node is now installed, though unfortunately Debian and Ubuntu use “nodejs” as the official binary name. This is incompatible with how many developers expect Node to be installed. As such, many Node-based scripts will fail if you do not create a symlink from “node” to “nodejs.” We’ll do that now to improve compatibility.

ln -s `which nodejs` /usr/local/bin/node
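As a quick sanity check, ask both binaries for their version; the two commands should print the same version number, confirming the symlink works:

nodejs -v
node -v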

Another more advanced way to install Node is via the Node Version Manager. NVM lets you pick from all available Node versions, install multiple versions simultaneously, and switch between them dynamically as needed.

Start by installing NVM’s dependencies. These are needed to compile any binary modules, and to build the Node releases themselves.

apt-get install build-essential libssl-dev

NVM ships an installation script that performs all necessary setup. Here we retrieve it from GitHub and execute it from the shell.

cd /root
wget https://raw.githubusercontent.com/creationix/nvm/v0.31.2/install.sh
bash install.sh

The installation process modified your shell startup scripts. To continue, you can either source them directly or log out and back in. This will update your path, in addition to setting up NVM’s necessary support for switching between Node versions.
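For example, assuming the installer appended its setup lines to root’s ~/.bashrc or ~/.profile (the exact file depends on your shell), you can load them into the current session like this:

# depending on which file the NVM installer modified:
source ~/.bashrc
# or
source ~/.profile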

We can now use NVM to get a list of every single Node version ever released.

nvm ls-remote

As you can see, you have quite a few versions from which to choose. At the time of this writing, the latest version is 6.2.2. We recommend choosing that, or whatever the most recent release is, unless your app or framework specifically requires something older.

(....)
v4.4.2
v4.4.3
v4.4.4
v4.4.5
v4.4.6
v4.4.7
v5.0.0
v5.1.0
v5.1.1
v5.2.0
v5.3.0
v5.4.0
v5.4.1
v5.5.0
v5.6.0
v5.7.0
v5.7.1
v5.8.0
v5.9.0
v5.9.1
v5.10.0
v5.10.1
v5.11.0
v5.11.1
v5.12.0
v6.0.0
v6.1.0
v6.2.0
v6.2.1
v6.2.2

Using a version of Node via NVM involves two steps. First you’ll need to install the desired version. This step downloads or compiles the necessary binaries, and only needs to be performed once per version of Node you install.

nvm install 6.2.2

Let’s make sure the installation succeeded by listing the versions of Node that NVM has installed.

nvm ls

You should see the following output:

-> v6.2.2
default -> 6.2.2 (-> v6.2.2)
node -> stable (-> v6.2.2) (default)
stable -> 6.2 (-> v6.2.2) (default)
iojs -> N/A (default)

The next step is activating the version you wish to use. Activating a release makes that version the default against which any Node commands are run.

nvm use 6.2.2
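Note that nvm use only applies to your current shell session. If you’d like the same version to be picked up automatically in future sessions as well, NVM also provides a default alias (shown here with the same version number; adjust as needed):

nvm alias default 6.2.2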

Having done this, let’s verify that the new version is active:

node -v
v6.2.2

Node is now installed. To show just how easy working with Node can be, we’ll next build a web server based on the popular Express web framework. Start by creating a directory to contain the application and its dependencies.

mkdir /home/nodeapp
cd /home/nodeapp

Now create the script itself.

nano nodetest.js

Your script should contain the following code:

#!/usr/bin/env node
var express = require('express');
var app = express();
app.get('/', function (req, res) {
res.send('Globo.Tech says Hello!');
});
app.listen(6915, function () {
console.log('Basic NodeJS app listening on port 6915.');
});

Save and exit. Next we need to install the Express framework, along with all of its dependencies.

npm install express
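If you’d also like the dependency recorded in a package.json (optional for this small example, but good practice for real projects), initialize one first and then save Express into it:

npm init -y
npm install express --save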

Run your newly-created script as follows:

node /home/nodeapp/nodetest.js

If everything worked, you’ll see a message on the console indicating that the server is listening. If you don’t, ensure that the contents of your file match the above code exactly.

node nodetest.js
Basic NodeJS app listening on port 6915.

Because we included an interpreter line (#!/usr/bin/env node) at the beginning of the script, we can make it executable and run it without putting “node” in front of the filename.

chmod +x /home/nodeapp/nodetest.js
./nodetest.js

To confirm that everything worked, visit your server’s IP address on the port shown in the console message (6915) in a browser. If you see “Globo.Tech says Hello!” your NodeJS setup is working properly.
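You can also test from the server itself, assuming curl is available (it usually is on Ubuntu):

curl http://localhost:6915/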

Conclusion

You now have a sophisticated development platform for building complex, scalable services and back-ends. If you know anyone needing to install Node on their own Ubuntu server, be sure to do them a favor by letting them know about this article.