A few days ago I received a significant bill from Amazon Web Services. My reserved instance had expired and I was now paying the full price for my various server instances. It would have been great if Amazon had explicitly sent an email to say as much, but hey-ho.

I was running a fairly hefty r5.large instance, having needed the extra memory allocation for some projects I had previously been running. When those projects ended I was left with more resources than I needed. With it being renewal time, I figured it would be worth downgrading and saving the business some money.

AWS provides some great documentation and the process of changing an instance type is really simple. Unfortunately, however, I was unable to convert my instance to one of the newer r6g instances, which run on ARM-based Graviton processors rather than x86.

Comparing some EC2 instances, it seemed as though an r6g.medium instance was what I needed. Only 8GB of memory and 1 vCPU, but also only $13/month if reserved for 3 years.

Looking at the CloudWatch graphs for my previous instance, the memory/CPU allowances were barely being touched, so at this point it seemed logical to go for something small and upgrade if necessary.

Standard EC2 reserved instances are somewhat constraining, as you can't convert them when your needs change. That said, a convertible instance costs $150 more, and there is no reason (at this point) to expect I'll need more resources. Expect the unexpected?

As I couldn't use the 'change instance type' functionality I had to create a new instance, manually transfer everything over, and make sure everything was working as expected. Ideally without any downtime.

This was an interesting exercise for other reasons too. I wanted to see exactly what I needed on my server to run the company's products. Over the years I had installed all sorts of packages on the previous server, changed all sorts of configuration files and generally created a mess of a file system. This time I wanted to install everything that we needed, and keep track of it.

This is another post written mostly for my own future reference. If it helps anyone else, that is a bonus. All things considered, I was surprised at how simple it was in the end.

  • Create the instance
  • Assign the appropriate security groups to the instance to allow user access on ports 80/443.
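If doing this from the CLI rather than the console, it would be something along these lines (the security group ID is a placeholder):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0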
  • Assign the appropriate IAM roles to the server.
  • Install scripts to allow monitoring of memory/disk space usage as outlined here.
  • SSH into the server
  • Install packages
# Base tooling and the ondrej PPA for PHP 7.4
sudo apt-get update
sudo apt -y install software-properties-common
sudo add-apt-repository ppa:ondrej/php
# PHP 7.4, Phalcon and the extensions the products need
sudo apt -y install php7.4
sudo apt -y install php7.4-phalcon
sudo apt-get install -y php7.4-{bcmath,bz2,intl,gd,mbstring,mysql,zip}
# Disable Apache and serve everything via nginx + PHP-FPM instead
sudo systemctl disable --now apache2
sudo apt-get install nginx php7.4-fpm
systemctl status php7.4-fpm nginx
# Database server and Node tooling
sudo apt install mariadb-server
sudo apt install nodejs
sudo apt install npm
  • Install npm packages for the background services related to company products
  • Make sure 35-phalcon.ini is in /etc/php/7.4/fpm/conf.d
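To double-check the extension is present for FPM and pick up any changes:
ls -l /etc/php/7.4/fpm/conf.d/ | grep phalcon
sudo systemctl restart php7.4-fpm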
  • sftp into the server and transfer the nginx config files (put them into sites-available)
  • Change the nginx config files to use the correct php socket file /var/run/php/php7.4-fpm.sock
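In each server block the relevant bit looks roughly like this (the include line will vary with the existing config):
location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
}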
  • Create symlinks in sites-enabled
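For example, using one of the product configs (filename assumed), then testing and reloading nginx:
sudo ln -s /etc/nginx/sites-available/running.org /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx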
  • Comment out bind-address = 127.0.0.1 in /etc/mysql/mariadb.conf.d/50-server.cnf to allow external mysql connections.
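After commenting the line out, restart MariaDB so it starts listening on all interfaces:
sudo systemctl restart mariadb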
  • Create mysql user
CREATE USER 'mac'@'localhost' IDENTIFIED BY 'password';
CREATE USER 'mac'@'%' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON *.* TO 'mac'@'%';
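It's worth verifying that remote connections actually work before pointing anything at the new database (assuming the security group allows port 3306):
mysql -h <new-server-ip> -u mac -p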
  • Copy /etc/systemd/system/running-background.service across.
sudo systemctl daemon-reload
sudo systemctl start running-background
sudo systemctl enable running-background
  • Copy web root folders over for each product

NOTE: The default AWS user with keyfile authentication doesn't provide root access, so when transferring files across we invoke the SFTP subsystem via sudo, as outlined here.

sftp -s "sudo /usr/lib/openssh/sftp-server" aws
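aws here is just an SSH host alias (defined in ~/.ssh/config or similar); the entry looks something like this, with the user and key path depending on how the instance was launched:
Host aws
    HostName <elastic-ip-or-public-dns>
    User ubuntu
    IdentityFile ~/.ssh/aws-key.pem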

  • Use a test domain to verify the setup/configuration. Check that PHP is correctly installed etc.
  • Test SSL setup and functionality

sudo certbot --nginx -d zua.co.uk -d www.zua.co.uk

  • Change the Elastic IP to point to the new server (instead of the old one).

Note: Elastic IP is great. DNS propagation takes time and it is easy to make mistakes. This was the most important factor in minimising downtime: it meant I could set the new server up completely and then simply swap the Elastic IP over to point at it.
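The swap itself is a single association call from the console or the CLI (IDs are placeholders):
aws ec2 associate-address --allocation-id eipalloc-0123456789abcdef0 --instance-id i-0123456789abcdef0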

  • Rather than transfer the SSL certificates across, I just set up fresh ones
sudo certbot --nginx -d running.org -d www.running.org
sudo certbot --nginx -d trainingplan.com -d www.trainingplan.com
  • Install the Ghost command line tool.

npm install ghost-cli@latest -g

  • Update the Node.js version. apt had installed version 6, and Ghost requires v10+

curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
sudo apt-get install -y nodejs

  • Upload a backup of my Ghost blog's MySQL database
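The dump and restore are nothing fancy, roughly the following (the database name is assumed):
mysqldump -u mac -p ghost_prod > ghost_prod.sql    # on the old server
mysql -u mac -p -e "CREATE DATABASE IF NOT EXISTS ghost_prod"    # on the new server
mysql -u mac -p ghost_prod < ghost_prod.sql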
  • Install the same version of Ghost that the old server was running (so that the schema matches my backup)

ghost install 2.15.0

  • Update ghost version

ghost update

  • Let's Encrypt's certbot doesn't set up redirects correctly; my products should only be served on https:// non-www
  • Add this to the 443 server block to redirect https://www to https://
if ($host = www.running.org) {
    return 301 https://running.org$request_uri;
}
  • Manually set up SSL certificates for the Ghost blog. This creates certs for both www and non-www (ghost-cli seems to only create one for the non-www domain).

sudo certbot --nginx -d thomasclowes.com -d www.thomasclowes.com

  • Add the same if condition (redirecting www to non-www) to the thomasclowes.com-ssl.conf and thomasclowes.com.conf files. Ghost splits the SSL and non-SSL config into two files.
  • Upload routes.yaml to Ghost Blog so custom routes like /instagram work as required
  • Start Ghost.
  • Check that services are all running as expected.
  • Monitor error logs for a while to check if everything is working as expected.
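A couple of quick sanity checks (log paths are the nginx defaults):
systemctl status nginx php7.4-fpm mariadb running-background
sudo tail -f /var/log/nginx/error.log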
  • Profit!