Cheaper Pastures

Not long ago, I decided I was tired of using low-cost hosting like Bluehost.  I originally chose Bluehost because it is very inexpensive.  Bluehost also comes with some high-profile endorsements, but those probably have more to do with their generous affiliate program than anything else.  Heck, even I signed up for their affiliate program.  I gave them all of my personally identifying info in the form of a W-9, and they give me $65 if someone clicks on any of the links in this post and signs up for hosting.  Neat!

Personally, I found Bluehost to be limiting.  Their cheapest hosting plans suffer from poor performance, resulting in dismal page load times.  So I decided to move my site to GCE, and to set up my own hosting infrastructure with Kubernetes at the same time.  Overall, my experience with GCE was very good.  However, it was too expensive: the applications I host are just a hobby, and my hobby budget couldn’t keep up with Google’s prices for machines adequate to run Kubernetes.  Maybe that will change if I manage to cash in on that sweet, sweet affiliate money from Bluehost.

I settled on Linode because it is inexpensive, and because there are some good resources for deploying Kubernetes on Linode, like kahkhang/kube-linode.  By the way, Linode also has an affiliate program.  If someone clicks an affiliate link in this post and stays signed up for hosting for 90 days, I get $20 credited to my account.  Not quite as good as $65 I can spend any way I choose, but still nice.

Naturally, lower cost means fewer features.  Things like backup and recovery strategies are up to you.  Right now, I am backing up the most critical parts of my applications manually.  Linode offers backups for $5/month; however, they are filesystem backups rather than block-level backups, and I haven’t investigated whether they are suitable for backing up cluster nodes.

To get started, I signed up for a Linode account and got an API key.  After that, I followed the directions for kube-linode to provision my nodes.  kube-linode took some trial and error.  A few things I found to be important:

  • Make a backup copy of your ~/.kube/config file.
  • I couldn’t get Rook to install through kube-linode, so I skipped it there with the intention of installing it manually.
  • Make a settings.env file before you start.
  • kube-linode identifies machine types by Linode plan ID (1, 2, 3, and so on), but Linode’s website lists plans by memory size (2048, 4096, etc.) without mentioning the plan IDs.  In this case, plan ID 3 corresponds to the 4096 plan.
  • kube-linode doesn’t produce much output.  I wound up hacking in some logging to understand the problems I was encountering.
  • Skip all of the optional installs if you are having trouble or don’t think you will need them.
For reference, here are the relevant lines from my settings.env:

MASTER_PLAN=3              # The Linode plan ID for the master
WORKER_PLAN=3              # The Linode plan ID for the workers
INSTALL_ROOK=false         # I skipped installing Rook

When I used GCE, I deployed my blog with the Helm WordPress chart.  This time I wanted to set up the configuration myself, both to better understand how the deployment is configured and to have finer control over it.  For example, this time I chose the Alpine-based php-fpm WordPress image along with the Alpine nginx image to save on disk space and data transfer usage.  I chose nginx and php-fpm over Apache httpd in the hopes of having a smaller memory footprint.
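The basic shape of that setup is a pod with two containers, nginx and php-fpm, sharing the WordPress document root.  Here is a rough sketch of what such a deployment might look like; the names, labels, and image tags are illustrative rather than my actual manifest:

```yaml
# Sketch of a two-container WordPress pod: nginx serves requests and hands
# PHP off to php-fpm over localhost:9000.  Both containers mount the same
# document root so nginx can see the files WordPress manages.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress:fpm-alpine   # php-fpm listens on port 9000
        volumeMounts:
        - name: wordpress-files
          mountPath: /var/www/html
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        volumeMounts:
        - name: wordpress-files
          mountPath: /var/www/html    # shared document root
      volumes:
      - name: wordpress-files
        emptyDir: {}                  # a persistent volume in the real setup
```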

Because I didn’t have Rook configured, I created a volume using a hostPath for my WordPress and MySQL deployments.  This is far from ideal, so I am using WP All-In-One Migration to take backups of my site.  Still, it’s better than nothing: if the pod restarts, it can rebind to the same hostPath volumes.
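For anyone unfamiliar with hostPath volumes, a sketch of the idea (the path and names here are placeholders, not my actual configuration):

```yaml
# Illustrative hostPath volume in a pod spec.  The data lives on one node's
# disk, so the pod must be pinned to that node and the node becomes a single
# point of failure -- hence the manual backups.
      volumes:
      - name: wordpress-data
        hostPath:
          path: /mnt/data/wordpress   # placeholder path on the node
      nodeSelector:
        kubernetes.io/hostname: worker-1   # placeholder node name
```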

Instead of using Traefik, I exposed a NodePort with nginx listening on that port.  To reach the site, I used the public IP of one of my Linode instances.
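A NodePort service of roughly this shape does the job; the port number here is illustrative (Kubernetes assigns one from the 30000–32767 range if you don’t specify it):

```yaml
# Hypothetical NodePort service exposing the nginx container.  The site is
# then reachable at http://<node-public-ip>:30080 on any node in the cluster.
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  type: NodePort
  selector:
    app: wordpress
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080   # illustrative; omit to let Kubernetes pick a port
```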

The most difficult part of the migration was using WP All-In-One Migration to restore my site.  Everything looked like it was coming together until I encountered a 413 (Request Entity Too Large) error while uploading my backup file.  I pored over my nginx.conf file, making sure that I wasn’t doing anything to limit the request body size.  I checked the client_max_body_size value, and I updated the php.ini settings to make sure the maximum body size wasn’t too small.  Nothing seemed to work.  Finally, I discovered that I was missing a newline character when setting the php-fpm upload_max_filesize and post_max_size values.  Properly set, it looks like this:

fastcgi_param PHP_VALUE "upload_max_filesize=64m \n post_max_size=64m";
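For context, that line lives inside the PHP location block of nginx.conf.  A sketch of how the pieces fit together (the fastcgi_pass address assumes php-fpm runs alongside nginx in the same pod):

```nginx
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;     # php-fpm in the sidecar container
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    # PHP_VALUE takes newline-separated settings; without the \n, php-fpm
    # reads both directives as a single mangled value.
    fastcgi_param PHP_VALUE "upload_max_filesize=64m \n post_max_size=64m";
    client_max_body_size 64m;        # keep nginx's own limit in sync
}
```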

You can see the configuration files for this project on GitHub.