Ubuntu 10.04 EC2 Guide (Apache / MySQL / PHP / Postfix / Dovecot)

EC2 Guide: Hosting a website on Amazon EC2 (1 / 7)

The Amazon Elastic Compute Cloud (Amazon EC2) is a web service allowing companies or individuals to create / run / manage virtual servers (called instances) on Amazon's hardware infrastructure. It offers a choice of most operating systems and a range of virtual hardware configurations: if you want lots of processing power you can choose a High CPU instance, while for a database server you might choose a High Memory instance, and you can change instance type with ease. Better still, you pay only for what you use, by the hour, and new instances can be created / destroyed in seconds. It also offers excellent scalability, data storage and fault tolerance. In short it's a pretty good service!

Until relatively recently (December 2009) instances were not persistent; that is, any data you required on a server had to be (re-)created (by scripts) each time a machine was created / rebooted or crashed. This was, and still is, fine for compute-intensive batch jobs such as photo processing or number crunching, where you create a machine instance, pass it some data and it gives you back the output. That is (essentially) the end of the server's life. Obviously for a website this kind of arrangement is far from ideal. Thankfully Amazon addressed this issue and created a product called Elastic Block Store (EBS) for the task. A year on, many gremlins have been ironed out and it is now suitable for production website deployment.

Aims of this guide

This guide is not solely dedicated to creating an EBS-backed Amazon EC2 instance; that would take a few seconds and wouldn't really help many people. It is instead designed to guide a relative novice through all the steps required to configure a production Ubuntu 10.04 Lucid Lynx (LTS) server running in the cloud environment. Much of this guide will work on other Linux flavours with certain changes, but I have only tested it on Ubuntu 10.04. Debian-based systems will require relatively minor tweaks, while RPM / Gentoo / Slackware based distros would require significant code updates (although conceptually it is similar). Likewise it is not required that your server runs in the cloud, but there is some cloud-specific content in the guide.

Server Configuration and Scalability

With a small website it is likely that you will only need one server, and its size will be dictated by your site traffic / server load. As a site grows and more demand is put on the server, there eventually comes a point where you have to make a change. There are a couple of options here:

Even if you have a site with low traffic it's always prudent to plan ahead and have an idea of what to do when the day comes that your server can no longer cope. Amazon EC2 allows an upgrade to a larger instance very quickly, but ultimately you run out of larger instances to use once you hit their Extra Large instance, and you are then stuck with migration difficulties.

Again Amazon EC2 offers a unique solution to this issue: their EBS storage. It doesn't just have to be used for machine images; EBS volumes can be used essentially as detachable hard disks. This means that you can separate all core (config and data) files for the web-server (e.g. Apache / PHP, site data, vhosts etc), mail-server (e.g. Postfix / Dovecot, e-mails etc) and database server (e.g. config and data) onto separate "drives", and if the situation ever arises where the server cannot cope you simply detach the required EBS volume from one instance and attach it to another, making minimal configuration changes. You can of course copy EBS volumes quickly too, and Amazon offers a load balancer if multiple web-servers are used.
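As a rough sketch of how such a move might look from the command line (using the EC2 API tools that are installed later in this guide; the volume ID, instance IDs and device node here are placeholders):

```shell
# Detach the database volume from the overloaded instance...
ec2-detach-volume vol-XXXXXXXX --instance i-AAAAAAAA

# ...attach it to the new, larger instance at the same device node...
ec2-attach-volume vol-XXXXXXXX --instance i-BBBBBBBB --device /dev/sdd

# ...and on the new instance mount it exactly as before
sudo mount /dev/sdd /database
```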

Obviously I don't want to force this choice on anyone (because it does cost a few dollars a month extra), so I have set this up in such a way that it is completely optional to do this. Just follow the sections of the guide relevant to your needs.

Requirements

EC2 Guide: Preparing required tools (2 / 7)

If you are using Amazon EC2 the following steps are necessary, but if not (i.e. you are just following this guide to set up a server) please skip this page and install Ubuntu 10.04 Lucid Lynx (LTS) from a DVD.

Register with Amazon AWS

I'm not going to go into detail for the actual registration of services, but you need a credit / debit card, a valid e-mail address and a phone(!). It takes a while to get all of it done / activated, and there are several services you can use, but for now only EC2 needs configuring.

There are several pieces of information that are required to manage your instances and they are configured / obtained in the Security Credentials section of your Amazon account. You will need to create and write down / save:

Install Firefox / Elasticfox

If you don't use Firefox then please download the latest version. Because Amazon haven't maintained Elasticfox for quite some time I no longer use the official version; instead I recommend you install this enhancement of Elasticfox, which is a simple and bug-free tool for managing all things AWS. To add it to Firefox:

Configure Elastic Fox

Launch Elasticfox by opening Firefox and selecting Tools -> Elasticfox - N.B. it is not under the Add-ons section, it has its own button. You should be prompted to enter your AWS credentials. You can enter new details or modify existing ones by clicking on the "Credentials" button in the top centre of the Elasticfox window:

You now need to create a Key Pair:

Download PuTTY / PuTTYgen (Windows only)

To manage your instance you will need an SSH tool, and for Windows systems that basically means PuTTY, so please grab a copy; it's free (and legal in most countries - please check yours!).

The Key Pair file that you created earlier (mykey.pem) is sadly not compatible with PuTTY, but PuTTY does come with a tool to convert it into a usable format called PuTTYgen. Download this too, run it, load in your key file (you might need to select "All Files" under the "Files of Type" dropdown) and click "Save private key". Save this new key (mykey.ppk) alongside your original (mykey.pem) - don't delete either!
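If you happen to be doing this from Linux / OSX instead, the same conversion can be scripted with the command-line puttygen from the putty-tools package (filenames as in the example above):

```shell
# Convert the Amazon .pem private key into PuTTY's .ppk format
puttygen mykey.pem -o mykey.ppk
```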

Configure Elasticfox SSH

Launch Elasticfox and visit the Tools panel in the top right of the Elasticfox window.

Windows

Update the following:

Linux / OSX

Update the following:

Choose a Region and Availability Zone

These should be reasonably easy choices, but still ones that need making. There are currently several main regions, each with several availability zones:

Now, if you were using this in an enterprise environment you would probably want multiple servers in at least two of these regions, so that if one malfunctioned the other could take over sole operation. However, since we are interested in a single-server setup for now, it's best just to choose the region / zone nearest to you / your audience and stick with it. In Elasticfox just choose your option from the top-left drop-down menu.

Define Security Group(s)

A Security Group is a set of rules applied to an instance defining which ports are available to whom. In general this boils down to 5 options per port: closed, open to the internal Amazon network, open to an IP of your choice, open to everyone, or open to a combination of the last three. It is easiest to understand with a couple of examples:

  1. Apache (the web-server) allows people to view web pages on port 80 (http://) or port 443 (https://) by default. It is likely that you want both of these options available to everybody, so you would set unrestricted access (0.0.0.0/0) to these two ports.
  2. SSH (the secure shell) allows you to connect to your server to manage it directly, on port 22 by default. It is likely that you want this available only to you, so you would restrict access to your own IP address (18.232.99.123/32) on this port.
  3. If you had a separate MySQL database server it would accept requests on port 3306 by default. You would need to let your web-server connect to it, but in this instance it would be wise to restrict access to only internal cloud machines (10.0.0.0/8). You could also allow your IP (18.232.99.123/32) direct access or of course allow everyone (0.0.0.0/0) if you had a non-cloud based server requiring access.

You only need one security group, but as mentioned previously there may come a time when you would like to separate your server into different parts. So I created 5 security groups which would allow a combo server set up (add all 5 groups) or individualised servers (with only the necessary groups added). My groups were:

Web-server

Protocol | From Port / ICMP Type | To Port / ICMP Code | Source User:Group | Source CIDR
tcp      | 80                    | 80                  |                   | 0.0.0.0/0
tcp      | 443                   | 443                 |                   | 0.0.0.0/0

Database Server

Protocol | From Port / ICMP Type | To Port / ICMP Code | Source User:Group | Source CIDR
tcp      | 3306                  | 3306                |                   | 10.0.0.0/8

E-mail Server

Protocol | From Port / ICMP Type | To Port / ICMP Code | Source User:Group | Source CIDR
tcp      | 25                    | 25                  |                   | 0.0.0.0/0
tcp      | 110                   | 110                 |                   | 0.0.0.0/0
tcp      | 143                   | 143                 |                   | 0.0.0.0/0
tcp      | 465                   | 465                 |                   | 0.0.0.0/0
tcp      | 993                   | 993                 |                   | 0.0.0.0/0
tcp      | 995                   | 995                 |                   | 0.0.0.0/0

SSH / SFTP / RDP

Protocol | From Port / ICMP Type | To Port / ICMP Code | Source User:Group | Source CIDR
tcp      | 22                    | 22                  |                   | 18.232.99.123/32
tcp      | 3389                  | 3389                |                   | 18.232.99.123/32

FTP (optional)

Protocol | From Port / ICMP Type | To Port / ICMP Code | Source User:Group | Source CIDR
tcp      | 21                    | 21                  |                   | 18.232.99.123/32

You can then load these in any combination you like to create servers fit for most purposes.
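If you ever need to recreate these groups without Elasticfox, the same rules can be scripted with the EC2 API tools installed later in this guide; a sketch for the web-server and SSH groups (group names and IPs as in the tables above):

```shell
# Create the web-server group and open HTTP / HTTPS to everyone
ec2-add-group web-server -d "Public web traffic"
ec2-authorize web-server -P tcp -p 80 -s 0.0.0.0/0
ec2-authorize web-server -P tcp -p 443 -s 0.0.0.0/0

# Create the SSH / RDP group, locked to a single admin IP
ec2-add-group ssh-sftp-rdp -d "Admin access"
ec2-authorize ssh-sftp-rdp -P tcp -p 22 -s 18.232.99.123/32
ec2-authorize ssh-sftp-rdp -P tcp -p 3389 -s 18.232.99.123/32
```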

Assign and Whitelist an Elastic IP

Your machine will need an IP address (X.X.X.X) so that you can configure your zone files and point your domain at it. This is very easy to do in Elasticfox:

Until relatively recently Amazon had real trouble with IP blacklisting (especially bad for e-mail), but they now have a process in place which whitelists a new IP for you within a few days. For this to happen you need to fill out a use-case request form to remove sending restrictions. It only takes a minute or two to inform them of your intentions and get your IP whitelisted.

Choose an Instance Size and AMI file

Before you can launch an instance you must first decide which type you need. There are many available, all with different costs attached, so it's best not to be greedy! One key point to consider is that some are 32 bit and some are 64 bit; this will dictate which AMI you choose to run.

Once you have made your choice you can visit Ubuntu's AMI list and choose the right one for you. Choose the correct (either 64 bit or 32 bit) EBS backed version for your chosen region and copy the AMI reference down.

Launch an AMI

Launching an AMI file is quite straightforward. In Elasticfox:

Associate your Elastic IP

Connect to your Instance

You should now see a terminal window open and be logged in to your server!

EC2 Guide: Core Software Installation (4 / 7)

The image that we are starting with is extremely minimal. While pre-configured images exist that allow a rapid setup for a certain server type (e.g. a web server / mail server), it is often better to install exactly what you need on a machine and nothing more. This process will install everything needed to create a web-server / database-server / e-mail-server combo box; of course you could install only the parts applicable to your server if it is a single-purpose box.

Install some basic apps/utilities

We need the following very basic utilities for a multitude of tasks so they go on first.

sudo aptitude install ssh openssh-server nano 

Set a Root User Password

By default the root account has no password, but because it is sometimes advantageous to run as root (not just sudo) it's a good idea to set one. Just don't forget it! (I use my IronKey for this kind of information storage)

sudo passwd 

Set an Ubuntu User Password

The default system user is called "ubuntu" and has a random password assigned to it. There are arguments for and against changing this, but since I use it as my default user and don't always want to count on having my AWS key file handy, I set one. Again, write it down somewhere safe.

sudo passwd ubuntu 

Create an FTP / SFTP User

It's often convenient to be able to move data onto a server (especially for a web server), so we'll add a user just for this.

sudo useradd -m -d /home/ftp-user ftp-user 
sudo passwd ftp-user 
sudo adduser ftp-user www-data 

Disable root SSH / allow SSH/SFTP with a password

There's no reason to allow the root user to log in via SSH (even if it is IP locked). We also want our other users to be able to log in with a password.

sudo nano /etc/ssh/sshd_config 

Uncomment / update the necessary lines:

[...]
PermitRootLogin no
[...]
PasswordAuthentication yes
[...]
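Before restarting it's worth asking sshd to validate the edited file; test mode prints nothing when the configuration is clean and reports any mistake with its line number:

```shell
# -t = test mode: parse the config and exit without serving connections
sudo /usr/sbin/sshd -t
```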

Apply the changes:

sudo /etc/init.d/ssh restart 

Configure aptitude (package manager)

Remove the CD as a source and enable main, universe and multiverse support. The region in the mirror hostnames below should match the region your instance is running in.

sudo nano /etc/apt/sources.list 
deb http://eu-west-1.ec2.archive.ubuntu.com/ubuntu/ lucid main universe multiverse
deb-src http://eu-west-1.ec2.archive.ubuntu.com/ubuntu/ lucid main universe multiverse
deb http://eu-west-1.ec2.archive.ubuntu.com/ubuntu/ lucid-updates main universe multiverse
deb-src http://eu-west-1.ec2.archive.ubuntu.com/ubuntu/ lucid-updates main universe multiverse
deb http://security.ubuntu.com/ubuntu lucid-security main universe multiverse
deb-src http://security.ubuntu.com/ubuntu lucid-security main universe multiverse
deb http://archive.canonical.com/ lucid partner

Then update the aptitude database:

sudo aptitude update 

Update all installed apps/utilities

sudo aptitude safe-upgrade 

If a new kernel has been installed a reboot will be required.

sudo reboot 
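If you are unsure whether a new kernel arrived, Ubuntu leaves a marker file behind when a restart is needed; you can check for it before deciding to reboot:

```shell
# Ubuntu touches this file when an installed update requires a restart
if [ -f /var/run/reboot-required ]; then
    REBOOT_NEEDED=yes
else
    REBOOT_NEEDED=no
fi
echo "Reboot needed: $REBOOT_NEEDED"
```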

Configure the network

We need to tell our machine what IP and domain we are using. Update your hosts file to look like this:

sudo nano /etc/hosts 
127.0.0.1       localhost.domain.com   localhost
X.X.X.X   subdomain.domain.com   subdomain

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
sudo hostname subdomain 
sudo nano /etc/hostname 
subdomain

Restart the service:

sudo /etc/init.d/hostname restart 

There are some known errors with this service on Ubuntu 10.04, so don't pay too much attention to any message you receive. To check it has worked, run:

sudo hostname 
sudo hostname -f 

The first should read subdomain and the second subdomain.domain.com.

Set timezone and synchronize the system clock

Always a good idea to set the server to your preferred time zone and have it synchronise itself. There are perhaps neater ways to do this, but this is reliable!

sudo rm -f /etc/localtime 
sudo ln -sf /usr/share/zoneinfo/Europe/London /etc/localtime 

Make sure our server keeps itself up to date.

sudo aptitude install ntp ntpdate 

Disable AppArmor

This is a security application similar to SELinux, but it is often the cause of a lot of painful troubleshooting. It is not required to construct a secure system; anyone who tells you otherwise is wrong!

sudo service apparmor stop 
sudo update-rc.d -f apparmor remove 
sudo aptitude remove apparmor apparmor-utils 

Install MySQL

sudo aptitude install mysql-client mysql-server 
New password for the MySQL "root" user: (choose)
Repeat password for the MySQL "root" user: (repeat)

Check that it is working:

sudo netstat -tap | grep mysql 
tcp 0 0 localhost:mysql *:* LISTEN 7753/mysqld

Install Postfix, Dovecot and Saslauthd

These are all required for our e-mail server (along with MySQL above).

sudo aptitude install postfix postfix-mysql postfix-doc dovecot-common dovecot-postfix dovecot-imapd dovecot-pop3d libsasl2-2 libsasl2-modules libsasl2-modules-sql sasl2-bin libpam-mysql openssl telnet mailutils 

The installer will prompt you for some basic Postfix settings; answer as follows (substituting your own domain):

General type of mail configuration: (internet site)
System mail name: (domain.com)
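Once the prompts are answered you can check that Postfix is actually listening by talking to it directly (telnet was included in the package list above); the exact banner text depends on your hostname:

```shell
# Postfix should greet you with a 220 banner, something like:
#   220 subdomain.domain.com ESMTP Postfix (Ubuntu)
telnet localhost 25
# type "quit" to close the session
```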

Install Amavisd-new, SpamAssassin, ClamAv (and related trimmings)

While not essential for an e-mail server, these anti-spam / anti-virus tools are a very, very good idea!

sudo aptitude install amavisd-new spamassassin clamav clamav-daemon zoo unzip bzip2 arj nomarch lzop cabextract apt-listchanges libnet-ldap-perl libauthen-sasl-perl clamav-docs daemon libio-string-perl libio-socket-ssl-perl libnet-ident-perl zip libnet-dns-perl libmail-spf-query-perl pyzor razor cpio file gzip lha pax rar unrar rpm2cpio p7zip-full unrar-free ripole

Install PHP / Apache

These form the basis of most Unix-based web servers.

sudo aptitude install apache2 apache2.2-common apache2-doc apache2-mpm-prefork apache2-utils libexpat1 ssl-cert libapache2-mod-php5 php5 php5-common php5-gd php5-mysql php5-imap php5-cli php5-cgi libapache2-mod-fcgid php-pear php-auth php5-mcrypt mcrypt php5-curl php5-pspell php5-imagick imagemagick flite 

There are some errors in the following packages' configuration files (the wrong comment character has been used)! Open each and change any comments starting with # (hash / pound) to begin with a ; (semi-colon).

sudo nano /etc/php5/cli/conf.d/imagick.ini 
sudo nano /etc/php5/cli/conf.d/imap.ini 
sudo nano /etc/php5/cli/conf.d/mcrypt.ini 

Configure the basic mods that Apache will use.

sudo a2enmod rewrite ssl actions include dav_fs dav auth_digest headers expires 
sudo a2dismod autoindex 

Apply these changes:

sudo service apache2 restart 
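To confirm the right modules were toggled you can ask Apache to list everything it has loaded; rewrite, ssl and friends should appear and autoindex should not:

```shell
# Print all statically compiled and shared modules currently enabled
apache2ctl -M 2>/dev/null | sort
```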

Install PHP Extensions (from PECL) (optional)

If you need to add any PECL extensions there are a few other requirements before installation can take place:

sudo aptitude install php5-dev make 

Once they are available to the system any PECL extensions can be added as follows (I'm using Mailparse and Sphinx as my example, you can use any you like!):

sudo pecl install mailparse sphinx 

To enable them a new configuration file is required for each.

sudo nano /etc/php5/conf.d/mailparse.ini 
; configuration for php Mailparse module
extension=mailparse.so
sudo aptitude install sphinxsearch 
sudo nano /etc/php5/conf.d/sphinx.ini 
; configuration for php Sphinx module
extension=sphinx.so

And then restart Apache to apply the changes:

sudo service apache2 restart 
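A quick way to confirm PHP picked the new extensions up (each enabled extension is listed by name):

```shell
# Should print mailparse and sphinx if the .ini files are being read
php -m | grep -Ei 'mailparse|sphinx'
```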

Update the locate database

The locate command can be extremely useful. (usage example: locate master.cf)

sudo updatedb

Install the EC2 API command line tools (cloud only)

In order to control the EC2 instances / volumes from the server itself (e.g. for backup cron jobs) we need to add a few tools. If you are going to attach any EBS volumes to the main EBS-backed instance, the tools for creating and managing the XFS filesystem are also required.

sudo aptitude install sun-java6-jre ec2-api-tools xfslibs-dev xfsdump xfsprogs dmapi 

To use these tools the keys that are required must be available to the system. Create a directory to store these X.509 keys. As management will likely be through the ubuntu (default) user we'll store the keys in their directory.

mkdir /home/ubuntu/ec2-keys 
nano /home/ubuntu/ec2-keys/cert-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.pem 

Add your certificate data (copy / paste) to this file and save it.

nano /home/ubuntu/ec2-keys/pk-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.pem 

Add your private key data (copy / paste) to this file and save it.

We can also add these keys (and other useful constants) to the global shell configuration files so that they are available to the terminal and login shells (called by crons).

sudo nano /etc/bash.bashrc 
[...]
# Basic convenience
alias ..='cd ..'
alias ...='cd ../..'
alias cd..='cd ..'
alias cd-='cd -'
alias ls='ls -lh --color=auto'
alias dir='ls -lh --color=auto'
alias df="df -h"
alias h=history
alias targz="tar cvfz"
alias untargz="tar xvfz"
alias tarbz2="tar cvfj"
alias untarbz2="tar xvfj"
alias unrar="unrar x"
[...]
export EDITOR=nano
export EC2_CERT=/home/ubuntu/ec2-keys/cert-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.pem
export EC2_PRIVATE_KEY=/home/ubuntu/ec2-keys/pk-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.pem
export JAVA_HOME=/usr/lib/jvm/java-6-sun/
export EC2_PRIVATE_IP="`wget -q -O - http://169.254.169.254/latest/meta-data/local-ipv4`"
export EC2_INSTANCE_ID="`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`"
export EC2_AVAIL_ZONE="`wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone`"
export EC2_REGION="`echo \"$EC2_AVAIL_ZONE\" | sed -e 's:\([0-9][0-9]*\)[a-z]*\$:\\1:'`"
export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
sudo nano /etc/profile 
[...]
# Basic convenience
alias ..='cd ..'
alias ...='cd ../..'
alias cd..='cd ..'
alias cd-='cd -'
alias ls='ls -lh --color=auto'
alias dir='ls -lh --color=auto'
alias df="df -h"
alias h=history
alias targz="tar cvfz"
alias untargz="tar xvfz"
alias tarbz2="tar cvfj"
alias untarbz2="tar xvfj"
alias unrar="unrar x"
[...]
export EDITOR=nano
export EC2_CERT=/home/ubuntu/ec2-keys/cert-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.pem
export EC2_PRIVATE_KEY=/home/ubuntu/ec2-keys/pk-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.pem
export JAVA_HOME=/usr/lib/jvm/java-6-sun/
export EC2_PRIVATE_IP="`wget -q -O - http://169.254.169.254/latest/meta-data/local-ipv4`"
export EC2_INSTANCE_ID="`wget -q -O - http://169.254.169.254/latest/meta-data/instance-id`"
export EC2_AVAIL_ZONE="`wget -q -O - http://169.254.169.254/latest/meta-data/placement/availability-zone`"
export EC2_REGION="`echo \"$EC2_AVAIL_ZONE\" | sed -e 's:\([0-9][0-9]*\)[a-z]*\$:\\1:'`"
export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
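The EC2_REGION export above derives the region by stripping the trailing zone letter from the availability zone string. A quick sanity check of that sed expression, using eu-west-1a as an example zone:

```shell
# Example availability zone, as returned by the instance metadata service
EC2_AVAIL_ZONE="eu-west-1a"

# Strip the trailing zone letter(s), leaving just the region
EC2_REGION=$(echo "$EC2_AVAIL_ZONE" | sed -e 's:\([0-9][0-9]*\)[a-z]*$:\1:')

echo "$EC2_REGION"   # eu-west-1
```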

Apply these changes.

rm /home/ubuntu/ec2-keys/*.pem~ 
source /etc/bash.bashrc 
source /etc/profile 
source ~/.bashrc 

Update the permissions on these files:

sudo chmod 0600 /home/ubuntu/ec2-keys/* 

At this point it is likely a good idea to disable your ability to (accidentally) terminate your server!

ec2-modify-instance-attribute $EC2_INSTANCE_ID --disable-api-termination true --region $EC2_REGION 

It should confirm that this has been actioned. If you wish to terminate your instance in the future you will need to set this attribute back to false.
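You can verify the flag took effect at any time with the matching describe command from the same toolset:

```shell
# Should report the disableApiTermination attribute as true
ec2-describe-instance-attribute $EC2_INSTANCE_ID --disable-api-termination --region $EC2_REGION
```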

Install Subversion (optional)

I like to use Subversion as my version control system because there's just me, but you may prefer Git, Mercurial, Bazaar or many of the other alternatives. This is not required for the system to run!

sudo aptitude install subversion libapache2-svn 
sudo service apache2 restart 

Add a Remote Desktop system (optional / large)

This isn't something that I would bother with, but if it is absolutely required then FreeNX is excellent.

export DEBIAN_FRONTEND=noninteractive 
sudo aptitude install ubuntu-desktop 
sudo add-apt-repository ppa:freenx-team 
sudo aptitude update 
sudo aptitude install freenx 
cd /usr/lib/nx 
sudo wget http://launchpadlibrarian.net/47959420/nxsetup.tar.gz 
sudo tar xzvf nxsetup.tar.gz 
sudo ./nxsetup --install 

Download and launch the NX client and connect to the server via SSH using a user with a password login.

EC2 Guide: Attach existing EBS volumes (5b / 7)

This step assumes that you have already created all the EBS volumes following the exact steps in this guide; they just need attaching to the correct points, and a few folders need to be removed / re-factored.

Attach the EBS Volumes

We are going to use the same configuration as during initial creation. So, in Elasticfox:

Then back on the server:

sudo mkdir -m 000 /webserver 
sudo mkdir -m 000 /database 
sudo mkdir -m 000 /email 

echo "/dev/sdc /webserver xfs noatime 0 0" | sudo tee -a /etc/fstab 
echo "/dev/sdd /database xfs noatime 0 0" | sudo tee -a /etc/fstab 
echo "/dev/sde /email xfs noatime 0 0" | sudo tee -a /etc/fstab 

sudo mount -a 
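A quick check that all three volumes landed on the right mount points with the expected filesystem type:

```shell
# Each line should show an xfs filesystem on its mount point
df -hT | grep -E '/webserver|/database|/email'
```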

Make MySQL use the EBS volume data

This assumes we have an EBS volume with a mount point called "database" attached that contains all necessary data / permissions.

sudo service mysql stop 

Move some existing MySQL directories out of the way so the versions already on the EBS volume can be bind-mounted in their place.

sudo mv /etc/mysql /etc/mysql.ORIG 
sudo mv /var/lib/mysql /var/lib/mysql.ORIG 
sudo mv /var/log/mysql /var/log/mysql.ORIG 

sudo mkdir /etc/mysql /var/lib/mysql /var/log/mysql 
sudo chown mysql:mysql /var/lib/mysql 
sudo chmod 0700 /var/lib/mysql
sudo chown mysql:adm /var/log/mysql 
sudo chmod 0750 /var/log/mysql 
sudo chmod g+s /var/log/mysql 

echo "/database/etc/mysql /etc/mysql none bind" | sudo tee -a /etc/fstab 
echo "/database/data /var/lib/mysql none bind" | sudo tee -a /etc/fstab 
echo "/database/log /var/log/mysql none bind" | sudo tee -a /etc/fstab 

sudo mount -a 
sudo service mysql start 

Make Apache / PHP use the EBS volume data

This assumes we have an EBS volume with a mount point called "webserver" attached that contains all necessary data / permissions.

sudo service apache2 stop 

Move some existing Apache directories out of the way so the versions already on the EBS volume can be bind-mounted in their place.

sudo mv /etc/apache2 /etc/apache2.ORIG 
sudo mv /var/log/apache2 /var/log/apache2.ORIG  
sudo mv /var/www /var/www.ORIG 
sudo mv /etc/php5 /etc/php5.ORIG 

sudo mkdir /etc/apache2 /etc/apache2/vhosts /var/log/apache2 /var/www /etc/php5 
sudo chown root:adm /var/log/apache2 

echo "/webserver/apache2 /etc/apache2 none bind" | sudo tee -a /etc/fstab 
echo "/webserver/vhosts /etc/apache2/vhosts none bind" | sudo tee -a /etc/fstab 
echo "/webserver/log /var/log/apache2 none bind" | sudo tee -a /etc/fstab 
echo "/webserver/sites /var/www none bind" | sudo tee -a /etc/fstab 
echo "/webserver/php5 /etc/php5 none bind" | sudo tee -a /etc/fstab 

sudo mount -a 
sudo service apache2 start 

Make Postfix / Dovecot use the EBS volume data

This assumes we have an EBS volume with a mount point called "email" attached that contains all necessary data / permissions.

sudo service postfix stop 
sudo service dovecot stop 

Move some existing Postfix / Dovecot directories out of the way so the versions already on the EBS volume can be bind-mounted in their place. Because we are going to use a MySQL-controlled, virtual mail setup we also need to create a new user and set some permissions.

sudo mkdir /home/ubuntu/smtp-certificates /var/log/dovecot /var/log/postfix /var/spool/vmail /home/vmail 

sudo groupadd -g 5000 vmail 
sudo useradd -g vmail -u 5000 vmail -d /home/vmail 
sudo chown vmail:vmail /home/vmail 

sudo chown vmail:adm /var/log/dovecot 
sudo chown syslog:adm /var/log/postfix 
sudo chown vmail:mail /var/spool/vmail 
sudo chmod 0775 /var/spool/vmail 

sudo mv /etc/postfix /etc/postfix.ORIG 
sudo mv /etc/dovecot /etc/dovecot.ORIG 
sudo mv /var/spool/postfix /var/spool/postfix.ORIG 

sudo mkdir /etc/postfix /etc/dovecot /var/spool/postfix 

echo "/email/postfix /etc/postfix none bind" | sudo tee -a /etc/fstab 
echo "/email/dovecot /etc/dovecot none bind" | sudo tee -a /etc/fstab 
echo "/email/spool/postfix /var/spool/postfix none bind" | sudo tee -a /etc/fstab 
echo "/email/spool/vmail /var/spool/vmail none bind" | sudo tee -a /etc/fstab 
echo "/email/vmail /home/vmail none bind" | sudo tee -a /etc/fstab 
echo "/email/smtp-certificates /home/ubuntu/smtp-certificates none bind" | sudo tee -a /etc/fstab 
echo "/email/log/dovecot /var/log/dovecot none bind" | sudo tee -a /etc/fstab 
echo "/email/log/postfix /var/log/postfix none bind" | sudo tee -a /etc/fstab 

sudo mount -a 
sudo service postfix start 
sudo service dovecot start 

Make Subversion use the EBS volume data (optional)

sudo mkdir /svn-repositories 

echo "/webserver/svn /svn-repositories none bind" | sudo tee -a /etc/fstab 

sudo mount -a 

EC2 Guide: Software Configuration from existing EBS volumes (6b / 7)

This page is designed for the re-attachment of existing EBS volumes previously created by completing this whole guide. If you do not have these 3 EBS volumes with all data configured on them you are on the wrong page and should use the contents below!

Configure ClamAV

ClamAV will be our anti-virus system and is very easy to set-up! We just need to update some permissions:

sudo adduser clamav amavis 
sudo adduser amavis clamav 

Configure SpamAssassin

SpamAssassin's function is given away by its name: we will use it to filter spam out of our e-mail.

sudo nano /etc/default/spamassassin 
[...]
ENABLED=1
[...]
CRON=1
[...]

We need to create a razor config file for it to work:

sudo razor-admin -create 

Start the service:

sudo service spamassassin start 

Configure Amavisd-New

Amavisd-New will be our interface between our mail transport program (Postfix) and our spam and virus filters (SpamAssassin / ClamAV).

sudo nano /etc/amavis/conf.d/15-content_filter_mode 

Uncomment the spam and anti-virus lines.

use strict;

# You can modify this file to re-enable SPAM checking through spamassassin
# and to re-enable antivirus checking.

#
# Default antivirus checking mode
# Uncomment the two lines below to enable it
#

@bypass_virus_checks_maps = (
   \%bypass_virus_checks, \@bypass_virus_checks_acl, \$bypass_virus_checks_re);

#
# Default SPAM checking mode
# Uncomment the two lines below to enable it
#

@bypass_spam_checks_maps = (
   \%bypass_spam_checks, \@bypass_spam_checks_acl, \$bypass_spam_checks_re);

1;  # insure a defined return

Some fairly key definitions are missing because the tools involved are not "free" software - thanks, Ubuntu. Enable / uncomment them!

sudo nano /etc/amavis/conf.d/01-debian 
[...]
$unfreeze   = ['unfreeze', 'freeze -d', 'melt', 'fcat'];
[...]
$lha    = 'lha';
[...]

Amavis needs permissions here:

sudo chmod -R 775 /var/lib/amavis/tmp 
sudo service amavis restart 

Configure Mail Logging

Because we mounted parts of our file system through EBS volumes we need to update the mail log locations.

sudo nano /etc/rsyslog.d/50-default.conf 
[...]
mail.*                          -/var/log/postfix/mail.log
[...]
mail.info                       -/var/log/postfix/mail.info
mail.warn                       -/var/log/postfix/mail.warn
mail.err                        /var/log/postfix/mail.err
[..]
sudo service rsyslog restart 
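You can prove the relocated mail logs are live without sending a real e-mail by writing a test line through syslog:

```shell
# Send a test message to the mail facility at info priority...
logger -p mail.info "mail log relocation test"

# ...and confirm it arrived in the new location
sudo tail -n 1 /var/log/postfix/mail.info
```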

Restart Postfix / Dovecot

Make sure that Amavisd-New is plugged in correctly:

sudo service postfix restart 
sudo service dovecot restart 

EC2 Guide: Backing up and clean up (7 / 7)

Logs / Log rotation

Because we changed the paths to our mail logs we need to update our logrotate files. Only update those that have changed:

sudo nano /etc/logrotate.d/rsyslog 
/var/log/postfix/mail.info
/var/log/postfix/mail.warn
/var/log/postfix/mail.err
/var/log/postfix/mail.log
/var/log/dovecot/log
/var/log/dovecot/info
/var/log/dovecot/lda
/var/log/daemon.log
/var/log/kern.log
/var/log/auth.log
/var/log/user.log
/var/log/lpr.log
/var/log/cron.log
/var/log/debug
/var/log/messages
{
        rotate 4
        weekly
        missingok
        notifempty
        compress
        delaycompress
        sharedscripts
        postrotate
                reload rsyslog >/dev/null 2>&1 || true
        endscript
}

Clean up after ourselves:

sudo rm /etc/logrotate.d/rsyslog~ 
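logrotate can be dry-run against the edited file to catch syntax mistakes before the weekly rotation actually fires:

```shell
# -d = debug: parse the config and describe what would happen, changing nothing
sudo logrotate -d /etc/logrotate.d/rsyslog
```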

Installing a snapshotting tool (cloud only)

One of the great features of EC2 is the ability to take a snapshot of an entire EBS volume (irrespective of size) in about 3-10 seconds. Automating these calls gives some excellent backup potential. I recommend a tool called ec2-consistent-snapshot by Eric Hammond to assist with backups. This is very easy to get set up:

sudo add-apt-repository ppa:alestic 
sudo aptitude update 
sudo aptitude install ec2-consistent-snapshot 

Configure snapshotting (cloud only)

To keep our commands simple and avoid having config files all over the place, we can set some environment variables describing our attached EBS volumes. To do this we need to obtain the attached volume IDs from Elasticfox:

Back on the server we set these values as environment variables:

sudo nano /etc/bash.bashrc 
[...]
export VOLUME_ID_CORE=vol-XXXXXXXX
export VOLUME_ID_WEBSERVER=vol-XXXXXXXX
export VOLUME_ID_EMAIL=vol-XXXXXXXX
export VOLUME_ID_DATABASE=vol-XXXXXXXX
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
sudo nano /etc/profile 
[...]
export VOLUME_ID_CORE=vol-XXXXXXXX
export VOLUME_ID_WEBSERVER=vol-XXXXXXXX
export VOLUME_ID_EMAIL=vol-XXXXXXXX
export VOLUME_ID_DATABASE=vol-XXXXXXXX
export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games

Reload our shell variables:

source /etc/bash.bashrc 
source /etc/profile 
source ~/.bashrc 
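Before going further it is worth confirming that the variables actually survived the reload. A quick check could look like this (`check_vol` is a throwaway helper of my own, not part of any tool):

```shell
# Sanity check: each VOLUME_ID_* variable should expand to a well-formed
# EBS volume ID. check_vol is a throwaway helper, not part of any tool.
check_vol() {
    case "$2" in
        vol-*) echo "$1=$2 (OK)" ;;
        *)     echo "WARNING: $1 is empty or malformed"; return 1 ;;
    esac
}

for name in VOLUME_ID_CORE VOLUME_ID_WEBSERVER VOLUME_ID_EMAIL VOLUME_ID_DATABASE; do
    eval "check_vol $name \"\$$name\"" || true
done
```

If any line prints a warning, re-check the exports in /etc/bash.bashrc and /etc/profile before moving on.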

We need to create bash scripts to run our commands. First we back up the database volume, which needs the MySQL details passed in so the snapshot can be taken safely (without losing data).

sudo mkdir /home/ubuntu/snapshotting 
sudo nano /home/ubuntu/snapshotting/database.sh 
#!/bin/bash -l

ec2-consistent-snapshot --description "Database Backup $(date +'%Y-%m-%d %H:%M:%S')" --region "$EC2_REGION" --mysql --mysql-host "localhost" --mysql-socket "/var/run/mysqld/mysqld.sock" --mysql-username "root" --mysql-password "root_db_password" --xfs-filesystem /database $VOLUME_ID_DATABASE
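For context, the sequence the tool automates for a database volume looks roughly like the sketch below. This is my own illustration rather than the tool's actual source, and the `run` function only records each command instead of executing it (the real commands need root and a live EBS volume). One simplification to note: the real tool holds the MySQL read lock on a single open connection for the duration of the freeze, rather than issuing lock and unlock as separate sessions.

```shell
# Rough illustration of the consistency sequence ec2-consistent-snapshot
# automates (not its actual source). run() records commands rather than
# executing them, so this is safe to trace through anywhere.
ran=""
run() { ran="$ran$*; "; }

fs=/database
vol="${VOLUME_ID_DATABASE:-vol-XXXXXXXX}"

run mysql -e "FLUSH TABLES WITH READ LOCK"                 # stop MySQL writes
run xfs_freeze -f "$fs"                                    # flush and freeze XFS
run ec2-create-snapshot --region "${EC2_REGION:-eu-west-1}" "$vol"
run xfs_freeze -u "$fs"    # thaw: EBS snapshots are point-in-time once initiated
run mysql -e "UNLOCK TABLES"                               # release the lock
echo "$ran"
```

Because the filesystem only stays frozen while the snapshot is being initiated, the interruption to a live site is brief.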

The webserver backup is similar, but does not need MySQL to be locked:

sudo nano /home/ubuntu/snapshotting/webserver.sh 
#!/bin/bash -l

ec2-consistent-snapshot --description "Webserver Backup $(date +'%Y-%m-%d %H:%M:%S')" --region $EC2_REGION --xfs-filesystem /webserver $VOLUME_ID_WEBSERVER

The e-mail backup is almost identical to the webserver one and likewise does not need MySQL to be locked:

sudo nano /home/ubuntu/snapshotting/email.sh 
#!/bin/bash -l

ec2-consistent-snapshot --description "E-mail Backup $(date +'%Y-%m-%d %H:%M:%S')" --region $EC2_REGION --xfs-filesystem /email $VOLUME_ID_EMAIL

These files need to be executable, and we should clean up any temporary files our editor may have left.

sudo chmod +x /home/ubuntu/snapshotting/* 
sudo rm /home/ubuntu/snapshotting/*~ 

In order to freeze / unfreeze the XFS file-system, the actual commands need to run as root, so we will add them to root's crontab.

sudo su root 
crontab -e 
0 6 * * * /home/ubuntu/snapshotting/database.sh >> /home/ubuntu/backup.log 2>&1
1 6 * * * /home/ubuntu/snapshotting/webserver.sh >> /home/ubuntu/backup.log 2>&1
2 6 * * * /home/ubuntu/snapshotting/email.sh >> /home/ubuntu/backup.log 2>&1

Leave the root user.

exit 

Cleaning up old snapshots (cloud only)

Amazon doesn't seem to offer a tool to do this neatly, and after a bit of research I found several written in PHP (a language I know), so I decided to roll my own based on a few of them. It's only a rough draft, but it meets my current needs.

sudo nano /var/www/remove_old_snapshots.php 
#!/usr/bin/php
<?php
# Options may be hard coded here or passed in at run time
$ec2_private_key	= null; # '/home/ubuntu/pk-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.pem';
$ec2_cert			= null; # '/home/ubuntu/cert-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.pem';
$regions			= null; # array('eu-west-1', 'us-east-1'); # OR # 'eu-west-1';
$volumes			= null; # array('vol-XXXXXXXX', 'vol-XXXXXXXX'); # OR # 'vol-XXXXXXXX';
$number_to_keep		= null; # array(7, 1); # OR # 7;
$min_number_to_keep	= 1;	# Prevent any accidents

# Optional list of snapshot IDs to omit from this purge process
$omit = array(); # array('snap-XXXXXXXX', 'snap-XXXXXXXX'); # OR # 'snap-XXXXXXXX';

$options = getopt('K:C:v:n:r:o:');
 
if(!isset($options['v']) OR is_null($options['v']))
{
	if(is_null($volumes))
	{
		echo "ERROR: Please provide at least one volume to check (-v option)\n";
		exit;
	}
	else
	{
		$options['v'] = $volumes;
	}
}
 
if(!isset($options['n']) OR is_null($options['n']))
{
	if(is_null($number_to_keep))
	{
		echo "ERROR: Please provide number of snapshots to keep (-n option)\n";
		exit;
	}
	else
	{
		$options['n'] = $number_to_keep;
	}
}
 
if(!isset($options['r']) OR is_null($options['r']))
{
	if(is_null($regions))
	{
		echo "ERROR: Please provide a region for these volumes (-r option)\n";
		exit;
	}
	else
	{
		$options['r'] = $regions;
	}
}
 
if(!isset($options['K']) OR is_null($options['K']))
{
	if(is_null($ec2_private_key))
	{
		echo "ERROR: Please provide an EC2 key file location for these volumes (-K option)\n";
		exit;
	}
	else
	{
		$options['K'] = $ec2_private_key;
	}
}
 
if(!isset($options['C']) OR is_null($options['C']))
{
	if(is_null($ec2_cert))
	{
		echo "ERROR: Please provide an EC2 Cert file location for these volumes (-C option)\n";
		exit;
	}
	else
	{
		$options['C'] = $ec2_cert;
	}
}
 
if(!isset($options['o']) OR is_null($options['o']))
{
	$options['o'] = $omit;
}
 
if(!is_array($options['v']))
{
	$volumes = array($options['v']);
}
else
{
	$volumes = $options['v'];
}
 
if(!is_array($options['n']))
{
	$number_to_keep = array($options['n']);
}
else
{
	$number_to_keep = $options['n'];
}
 
if(!is_array($options['r']))
{
	$regions = array($options['r']);
}
else
{
	$regions = $options['r'];
}
 
if(!is_array($options['K']))
{
	$ec2_private_key = $options['K'];
}
else
{
	echo "ERROR: Please provide a SINGLE EC2 key file location for these volumes (-K option)\n";
	exit;
}
 
if(!is_array($options['C']))
{
	$ec2_cert = $options['C'];
}
else
{
	echo "ERROR: Please provide a SINGLE EC2 Cert file location for these volumes (-C option)\n";
	exit;
}
 
if(!is_array($options['o']))
{
	$omit = array($options['o']);
}
else
{
	$omit = $options['o'];
}
 
foreach($number_to_keep as $key => $value)
{
	if($value < $min_number_to_keep)
	{
		# Enforce the safety minimum
		$number_to_keep[$key] = $min_number_to_keep;
	}
}
 
$total_volumes	= count($volumes);
$total_kept		= count($number_to_keep);
$total_regions	= count($regions);
 
if($total_volumes != $total_kept)
{
	if($total_volumes > $total_kept AND $total_kept == 1)
	{
		# Use the number to keep as a global value
		foreach($volumes as $key => $value)
		{
			$number_to_keep[$key] = $number_to_keep[0];
		}
	}
	else
	{
		# An error is likely, we should stop
		echo "ERROR: Please check the values entered, there is an argument mismatch (-v vs. -n)\n";
		exit;
	}
}
 
if($total_volumes != $total_regions)
{
	if($total_volumes > $total_regions AND $total_regions == 1)
	{
		# Use the region as a global value
		foreach($volumes as $key => $value)
		{
			$regions[$key] = $regions[0];
		}
	}
	else
	{
		# An error is likely, we should stop
		echo "ERROR: Please check the values entered, there is an argument mismatch (-v vs. -r)\n";
		exit;
	}
}
 
foreach($regions as $key => $region)
{
	$process[$region][$volumes[$key]] = $number_to_keep[$key];	
}
unset($number_to_keep, $regions);
 
$raw = array();
foreach($process as $region => $details)
{
	$tmp = array();
	$cmd = 'ec2-describe-snapshots -K '.$ec2_private_key.' -C '.$ec2_cert.' --region '.$region;
	exec($cmd, $tmp);
 
	if(is_array($tmp))
	{
		foreach($tmp as $key => $value)
		{
			$tmp[$key] .= "\t".$ec2_private_key;
			$tmp[$key] .= "\t".$ec2_cert;
			$tmp[$key] .= "\t".$region;
		}
 
		$raw = array_merge($raw, $tmp);
	}
}
 
$found = array();
foreach($raw as $snapshot)
{
	$tmp = explode("\t", $snapshot);
	if(in_array($tmp[2], $volumes) AND !in_array($tmp[1], $omit))
	{
		$tmp[] = $process[$tmp[11]][$tmp[2]];
		$found[$tmp[2]][strtotime($tmp[4])] = $tmp;
	}
}
 
foreach($found as $volume_id => $snapshots)
{
	krsort($found[$volume_id]);
}
 
unset($volumes, $process);
 
foreach($found as $volume_id => $snapshots)
{
	$total = 0;
	foreach($snapshots as $timestamp => $details)
	{
		if($total > $details[12] - 1)
		{
			echo 'Deleting Snapshot: '.$details[1]."\n";
			$out = array();
			$cmd = 'ec2-delete-snapshot -K '.$details[9].' -C '.$details[10].' --region '.$details[11].' '.$details[1];
			exec($cmd, $out);
 
			if(count($out) == 1 AND strcmp($out[0], 'SNAPSHOT	'.$details[1]) == 0)
			{
				echo "COMPLETE!\n";
			}
			else
			{
				echo "ERROR:\n";
				foreach($out as $line)
				{
					echo $line."\n";
				}
			}
		}
 
		$total++;
	}
}
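The retention rule at the heart of the script is simple: sort each volume's snapshots newest first, keep the first N, and everything after them becomes a deletion candidate. The same rule in miniature (the snapshot IDs and dates are made-up examples):

```shell
# The script's retention rule in miniature: sort newest first, and anything
# past the first $keep entries gets deleted. IDs and dates are made up.
keep=2
candidates=$(printf '%s\n' \
    '2010-11-01T06:00 snap-aaaaaaaa' \
    '2010-11-03T06:00 snap-cccccccc' \
    '2010-11-02T06:00 snap-bbbbbbbb' \
    | sort -r | tail -n +$((keep + 1)))
echo "would delete: $candidates"
```

In the PHP version the sort key is the snapshot start time (krsort over strtotime values) and the cut-off is the per-volume -n value.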

This needs to be triggered as a CLI process, so we will wrap the call in a bash script just like the others.

sudo nano /home/ubuntu/snapshotting/remove-old.sh 
#!/bin/bash -l

php /var/www/remove_old_snapshots.php -v $VOLUME_ID_WEBSERVER -v $VOLUME_ID_EMAIL -v $VOLUME_ID_DATABASE -n 7 -r $EC2_REGION -K $EC2_PRIVATE_KEY -C $EC2_CERT

The -n value is the number of snapshots of each volume to keep. If you would like to keep a different number for each volume, specify -n three times (once per volume). Again, the script needs to be executable, and our editor may have left a temporary file to clean up:

sudo chmod +x /home/ubuntu/snapshotting/* 
sudo rm /home/ubuntu/snapshotting/*~ 

To remain consistent we will add our commands to root's crontab.

sudo su root 
crontab -e 
[...]
3 6 * * * /home/ubuntu/snapshotting/remove-old.sh >> /home/ubuntu/backup.log 2>&1

Leave the root user.

exit 

Hopefully that is everything you need to set up a secure server, so I wish you the best of luck with yours! All comments, criticisms and suggestions are welcome, but please post them on the relevant pages to keep each page as useful as possible. Thanks!