bava in the cloud with clusters

http://reclaim.cloud

Last weekend I took a small step for the bava, but potentially a huge step for Reclaim Hosting. This modest blog was migrated (once again!) into a containerized stack in the cloud in ways we could only dream about 7 years ago. There is more to say about this, but I’m not sure now is the right time given there is terror in the streets of the US of A and the fascist-in-charge is declaring warfare on the people. What fresh hell is this?!  But, that said, I’ve been hiding from America for years now, and quality lockdown time in Italy can make all the difference. Nonetheless, I find myself oscillating wildly between unfettered excitement about the possibilities of Reclaim and fear and loathing of our geo-political moment. As all the cool technologists say, I can’t go on, I’ll go on….

For anyone following along with my migrations since January, there have been 4 total. I migrated from one of Reclaim Hosting’s shared hosting servers in early January because the bava was becoming an increasingly unpredictable neighbor. The HOA stepped in, it wasn’t pretty. So, it meant new digs for the bava, and I blogged my move from cPanel to a Digital Ocean droplet that I spun up. I installed a LEMP environment, set up email, a firewall, etc. I started with a fresh CentOS 7.6 server and set it up as a means to get more comfortable with my inner sysadmin. It went pretty well, and cost me about $30 per month with weekly backups. But while doing a migration I discovered a container-based WordPress hosting service called Kinsta that piqued my interest, so I tried that out. But it started to get pricey, so I jumped back to Digital Ocean in April (that’s the third move) thinking that was my last.*


But a couple of weeks later I was strongly considering a fourth move to test out a new platform we’re working on, Reclaim Cloud, that would provide our community a virtualized container environment, filling a long-standing gap in our offerings by supporting a wide array of applications that run in environments other than LAMP. I started with a quick migration of my test Ghost instance using the one-click installer for Ghost (yep, that’s right, a one-click installer for Ghost). After that it was a single export/import of content and copying over some image files. As you can see from the screenshot above, while this Ghost instance was a one-click install, the server stack it runs on is made visible. The site has a load balancer, an NGINX application server, and a database, which we can then scale or migrate to different data centers around the world.

In fact, geo-location at Reclaim for cloud-based apps will soon be a drop-down option. You can see the UK flag at the top of this one as hope springs eternal that London will always be trEU. This was dead simple, especially given I was previously hosting my Ghost instance on a cPanel account, which was non-trivial to set up. So, feeling confident after just a few minutes on a Saturday, I spent last Sunday taking on the fourth (and hopefully final) migration of this blog to the Reclaim Cloud! I’ve become an old hand at this by now, so grabbing a database dump was dead simple, though I did run into an issue using the rsync command to move files to the new server, but I’ll get to that shortly.
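For the curious, the underlying move is the same dump-and-sync dance every WordPress migration comes down to. A minimal sketch, with placeholder database names, credentials, and paths rather than my actual setup:

# On the old Digital Ocean droplet: dump the WordPress database (names are placeholders)
mysqldump -u dbuser -p wordpress_db > bava.sql

# Sync the dump and the wp-content directory over to the new application server
rsync -avz bava.sql root@NEW_SERVER_IP:/tmp/
rsync -avz /var/www/html/wp-content/ root@NEW_SERVER_IP:/path/to/new/wp-content/

# On the new server: import the dump into the new database
mysql -u dbuser -p wordpress_db < /tmp/bava.sql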

First, I had to set up a WordPress cluster that has an NGINX load balancer, 2 NGINX application servers, a Galera cluster of 3 MariaDB databases, and an NFS file system. Each of these lives in its own container, pretty cool, no? But don’t be fooled, I didn’t set this up manually—though one could with some dragging and dropping—the Reclaim Cloud has a one-click WordPress Cluster install that allows me to spin up a high-performance WordPress instance, with each piece running as a different layer of the containerized stack:

And like having my own VPS at Digital Ocean, I have SSH and SFTP access to each and every container (or node) in the stack.

In fact, the dashboard also lets you browse and edit files right from the web interface—a kind of cloud-based version of the File Manager in cPanel.

I needed SSH access to rsync files from Digital Ocean, but that is where I ran into my only real hiccup. My Digital Ocean server was refusing the connection because it was defaulting to an SSH key, and given the key on the Reclaim Cloud stack was not what it was looking for, I started to get confused. SSH keys can make my head spin; Tim explained it like this:

I never liked that ssh keys were both called keys. Better analogy would be “private key and public door”. You put your door on both servers but your laptop has the private key to both. But the key on your laptop is not on either server, they both only have the public door uploaded. On your laptop at ~/.ssh you have two files id_rsa and id_rsa.pub. The first is the key. Any computer including a server that needs to communicate over ssh without a password would need the key. And your old server was refusing password authentication and requiring a key.

That’s why Timmy rules. After that I enabled password prompting over SSH when syncing between the Cloud and Digital Ocean using this guide, and after that hiccup I was in business. The last piece was mapping the domain bavatuesdays.com:

And issuing an SSL certificate through Let’s Encrypt:

It’s worth noting here that I am using Cloudflare for DNS, and once I pointed bavatuesdays.com to the new IP address and cleared the local hosts file entry on my laptop, the site resolved cleanly over https and was showing secure. Mission accomplished. I was a cloud professional, I can do anything. THE BAVA REIGNS! I RULE!  Ya know, the usual crap from me.
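That hosts file trick deserves a quick aside: before flipping DNS I could preview the migrated site by pointing my laptop at the new environment directly. A rough sketch, assuming 203.0.113.10 stands in for the new container’s public IP (a documentation placeholder, not the real address):

# Add a temporary override so only my laptop resolves the domain to the new stack
echo "203.0.113.10 bavatuesdays.com www.bavatuesdays.com" | sudo tee -a /etc/hosts

# Sanity-check that the new environment is serving the site over https
curl -I https://bavatuesdays.com

# Once DNS is switched at Cloudflare, remove the override and flush the local DNS cache (macOS)
sudo sed -i '' '/bavatuesdays.com/d' /etc/hosts
sudo dscacheutil -flushcache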

But that was all before I was terribly humbled by trying to migrate ds106.us the following day. That was a 5-day ordeal that I will blog about directly, but until then—let me enjoy the triumph of a new, clustered day in which my blog seamlessly expands its resources whenever demand runs high.

I woke up to this email, which is what the clustering is all about: I have bavatuesdays set to add another NGINX application server to the mix when resource usage on the existing two goes over 50%. That’s the elasticity of the Cloud that got lost when anything not on your local machine was referred to as the cloud. A seamlessly scaling environment that meets resource demands but only costs you what you use, like a utility, was always the promise that most “cloud” VPS providers could not live up to. Once the resource spike was over I got an email telling me the additional NGINX node was spun down. I am digging this feature of the bava’s new home; I can sleep tight knowing the server Gremlins will be held at bay by the elastic bands of virtualized hardware.


*I worked out the costs of Digital Ocean vs Kinsta, and the price difference was a big reason to leave Kinsta, given the bava was running quite well in their environment.

N.B.: While writing this, Tim was working on his own post and found some dead image links on the bava as a result of my various moves; with the following command I fixed a few of them 🙂
wp search-replace 'https://bavatuesdays.com/wp-content/uploads' 'https://bavatuesdays.com/wp-content/uploads'
….
Made 8865 replacements. Please remember to flush your persistent object cache with `wp cache flush`.
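For future me: if I have to run this again, the safer pattern is to preview the changes first with a dry run. A sketch with placeholder URLs, not the exact strings from the fix above:

# Preview what would change without touching the database
wp search-replace 'http://old-domain.tld/wp-content/uploads' 'https://bavatuesdays.com/wp-content/uploads' --dry-run

# Run it for real, then flush the persistent object cache as the output suggests
wp search-replace 'http://old-domain.tld/wp-content/uploads' 'https://bavatuesdays.com/wp-content/uploads'
wp cache flush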

A Postscript on Server Migrations (redirecting network traffic to a new IP)

During the break between Christmas and New Year’s I migrated a server from Linode to Digital Ocean. We have just a handful left, and most of those should be gone this year. This migration was pretty straightforward, no WordPress portal or WHMCS instance, just a straight-up cPanel server. The plan was to run our handy dandy server deploy script, which gets about 95% of a new cPanel server set up in about 30 minutes, which is amazing given this used to be a day-long process. Once that server is set up we need to copy all the data between the two servers using IP addresses, given we want to keep the same hostname, i.e., universityx.reclaimhosting.com. This is easily done with the Transfer tool in cPanel, and migrating over 500 cPanel accounts took about an hour and a half.

Once all the accounts are migrated over cleanly, we need to point the DNS records in AWS’s Route 53 to the new IP address of the new server on Digital Ocean. If all went well that should be all set; the one mistake I made on this recent migration was not copying over the existing SSL certificate from the old server—it’s always something. So, after that’s done, another trick Tim showed me that has come in useful is redirecting all traffic from the old IP to the new IP server-wide. This post spells it out very well, and it ensures that any lingering traffic that may be going to the old server for all kinds of DNS reasons gets pushed to the new server right away.

https://www.debuntu.org/how-to-redirecting-network-traffic-to-a-new-ip-using-iptables/
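For the record, that guide boils down to a few iptables rules run on the old server, something along these lines (OLD_IP and NEW_IP are placeholders rather than our actual addresses):

# Allow the old server to forward packets
echo 1 > /proc/sys/net/ipv4/ip_forward

# Send anything that still arrives at the old IP on to the new server
iptables -t nat -A PREROUTING -d OLD_IP -j DNAT --to-destination NEW_IP

# Masquerade the forwarded traffic so replies route back through the old server
iptables -t nat -A POSTROUTING -j MASQUERADE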

Anyway, just putting this here in the event I need it again so I don’t have to dig through Slack to find the link, not to mention to remind myself of the mistakes I made last time so I can avoid them next time.

Restarting a Discourse Container

We have a server that runs a kind of multisite Discourse environment that I discussed a number of years ago in this post. It is an Ubuntu server with Docker installed, and each of the Discourse instances on that server is spun up in its own Docker container. It’s a very small, experimental part of what we do. In fact, we discontinued offering Discourse and Ghost in this kind of environment a while back, and are far more interested in options like Cloudron, which makes hosting Ghost a breeze. That said, we have a couple of Discourse instances we still host, and today the biggest one went down, which is always a bit of a scare for me given it is a unique environment. So, this post is simply going to retrace my steps in the terminal to fix it, because I always forget given it is not something I do often enough.

When I learned the server was down I figured I would try stopping and restarting the container to see if that would work. To do that I needed to go to /var/discourse:

cd /var/discourse

From there, I tried to stop the container (to find the container name I looked in the /var/discourse/containers/ directory, which has a YAML file for each install; the container names are everything before the .yml extension):

./launcher stop containername

That will stop the container and the following will restart it:

./launcher start containername
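While I’m writing these down, a couple of other launcher and Docker incantations I always forget (containername is again whatever the YAML file is called):

# See which Discourse containers are actually running
docker ps

# Tail the logs for a specific instance
./launcher logs containername

# Stop and start in one go
./launcher restart containername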

But when I went to stop the container I got a storage-full error, and when I ran a

df -h

on the server it was confirmed: the disk was full. I then proceeded to run the trusty NCDU command to get a sense of what was taking up all the space. I have a suspicion it might be related to this overlay2 storage space issue others have complained about with Docker, but I took the easy route, deleted 10 GB of old backups for the site, and it was immediately back up and running. In the end a restart was not necessary, and I was able to solve a fairly random issue fairly quickly.
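For next time, a rough sketch of that cleanup, assuming a stock discourse_docker layout where backups live under /var/discourse/shared (the exact subdirectory matches the container name, so treat the paths as placeholders):

# Confirm the disk really is full
df -h

# Walk the Discourse data directories to find the space hogs
ncdu /var/discourse/shared

# Clear out site backups older than 30 days (double-check the path before hitting enter)
find /var/discourse/shared/containername/backups -type f -name '*.tar.gz' -mtime +30 -delete

# If the overlay2 suspicion pans out, pruning unused Docker images and containers can also reclaim space
docker system prune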

Whoowns or Whoowens?

One of the cPanel scripts I’ve found really useful as of late is the whoowns script that lets you know which account owns a specific domain. Let me provide a quick scenario.  You have an issue with a domain and you can’t figure out which account it lives in, which could mean it’s an addon domain that wasn’t registered through us, etc. Tracking it down can be a pain. You can figure out what server it is on by using a command like nslookup (name server lookup) that will tell you the hostname and identify the server:

nslookup themissingdomain.tld

The above command will return something like beathap.reclaimhosting.com, which means the account is on the Beathap server. But given it is not the primary domain of an account, it is not going to appear in the list of all cPanel accounts, and this is where I would get stuck.

But whoowns will tell you the account owner; just log in via the terminal and run the following command:

/scripts/whoowns themissingdomain.tld
themissi

That will tell you the account that domain lives in, which means problem solved. A simple, useful script.

So, when extolling its virtues in Slack I wrote /scripts/whoowens—and soon after Tim had some fun and wrote his own script. Now, when you run /scripts/whoowens on any of Reclaim’s servers you get the following:

That’s geeky and it’s awesome. Hosting humor #4life.

Sequel Pro’s SQL Inserts

Another tool I’ve been getting more familiar with for sites that don’t have phpMyAdmin to access the MySQL databases is Sequel Pro. It’s an open source application for managing MySQL databases on the Mac. I have come to appreciate it in newfound ways after the UNLV migration; it is to database management what Transmit has been to moving files around via FTP. Anyway, one thing I discovered it can do is copy the structure of a database table, such as wp_users:

And then paste it as SQL code into something like phpMyAdmin:

Sequel Pro does all the SQL query structuring for me, which is awesome. It was a nice little bonus to discover, and another trick for the toolbox.
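When there is no GUI handy at all, the same trick is doable from the command line. A sketch, with placeholder database and user names:

# Dump only the structure (the CREATE TABLE statement) of wp_users, no row data
mysqldump --no-data -u dbuser -p wordpress_db wp_users > wp_users_structure.sql

# Recreate that table structure in another database
mysql -u dbuser -p other_db < wp_users_structure.sql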


Digital Ocean’s One-Click Apps vs. Cloudron

Digital Ocean has been en fuego as of late. They announced a whole bunch of new droplet plans, and the price point for all of them has gone down. This is very good news for Reclaim Hosting because it gives us some breathing room with our infrastructure costs, allowing us to continue to keep costs low. We have been slowly moving most of our infrastructure from Linode and ReliableSite to Digital Ocean, and we could not be happier. They are constantly improving their offerings, and being in a virtual environment where we can increase storage or scale CPU instantaneously makes our life (and our clients’) a lot easier.

One-click Apps at Digital Ocean

In addition to the new plans and pricing, I noticed they were featuring one-click apps as well (though I’m not sure how new this is), and I took a peek to see what they offered. It was interesting to see that some of the applications they featured, namely Discourse (the forum software) and Ghost (the blogging app), were apps Reclaim was offering beyond our shared hosting cPanel-based LAMP stack. Given we’ve been exploring a one-click option with Cloudron (I recently blogged about setting up Ghost using Cloudron), I wanted to compare Digital Ocean’s idea of one-click to Cloudron’s. Long story short, there is no comparison. Here is Digital Ocean’s command line interface for setting up Ghost:

Command line interface during Ghost setup on Digital Ocean’s one-click apps

Here is Cloudron’s:

One-click install of Ghost on Cloudron

Digital Ocean is amazing at what they do, but their idea of one-click installs still assumes a sysadmin level of knowledge, which, to be fair, makes sense given they are a service designed for sysadmins. When I tried the Ghost app it was, indeed, installed on a droplet in seconds, but the actual configuration required a full-blown tutorial of command line editing. In addition to pointing the domain, this meant setting up SSL and Nginx; granted, that simply meant typing “yes” or “no” and hitting enter, but even when you did, the setup was not guaranteed. After following the tutorial to the letter I still got the Nginx 502 bad gateway error, which meant I was stuck.

Ghost 502 Bad Gateway Nginx Error

I could have tried to troubleshoot the 502 error, but at this point it was just a test and from my experience it was far from one-click.
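Had I dug in, the usual first steps for a 502 behind Nginx would look something like this (a sketch; the install directory assumes a stock Ghost-CLI layout, which may differ on Digital Ocean’s image):

# Is Nginx itself up, and what is its error log complaining about?
systemctl status nginx
tail -n 50 /var/log/nginx/error.log

# Is the Ghost Node process actually running behind the proxy?
cd /var/www/ghost
ghost status
ghost doctor

# A restart of the Ghost service often clears a stale 502
ghost restart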

Discourse example

I then tried Discourse, and it was definitely easier than Ghost. It still required a tutorial, but that was primarily focused on setting up an SMTP account through Mailgun so the application could send email. After that, the setup was simple, but again the one-click setup process on Digital Ocean assumes an understanding of API-driven transactional email services like Mailgun or Sparkpost. Cloudron does not have a Discourse installer, so there is no real comparison there, but if it could manage the SMTP email setup in the background, I imagine it would be just as simple as their Ghost installer. I’m glad I explored Digital Ocean’s one-click application offerings because it confirms for me the potential power of tools like Cloudron that truly make it simple to install applications. Our community by and large will not be folks with sysadmin-level knowledge, so integrating a solution that is truly one-click, avoiding DNS and command-line editing, would be essential.

NCDU-fu

Sometimes it feels good to have some meager sysadmin competencies, such as knowing how to quickly identify where large files are in a particular hosting account. This issue comes up from time to time when someone discovers all their storage space has been eaten up, but they are not sure where or why. Often this is a symptom of a larger problem, such as an error_log that has run out of control, which suggests a bad process in a particular application, etc. That was the case on a ticket this morning, and luckily I knew the NCDU command. What is NCDU, you ask?

Ncdu is a disk usage analyzer with an ncurses interface. It is designed to find space hogs on a remote server where you don’t have an entire graphical setup available, but it is a useful tool even on regular desktop systems. Ncdu aims to be fast, simple and easy to use, and should be able to run in any minimal POSIX-like environment with ncurses installed.

In other words, a tool for a remote server that finds big files. You can install it on your server and then run it by navigating in the command line to the offending account, which on our cPanel servers would live at /home/offendingaccount, and running the ncdu command. It will list all the directories and their sizes, followed by files. You can then locate the directory with the largest usage, change into it, and run ncdu again until you find the offending file. In the example this morning it was a 6 GB error_log in a directory running WordPress, an easy fix to clean out space, and a good heads up that things in that account need to be checked for a bad plugin, theme, etc.
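A condensed version of that workflow, assuming a CentOS cPanel box with the EPEL repository available (the account name is obviously a placeholder):

# Install ncdu (it lives in the EPEL repository on CentOS)
yum install -y epel-release && yum install -y ncdu

# Scan the offending account and drill down interactively into the biggest directories
cd /home/offendingaccount
ncdu

# Once the culprit is found, an overgrown error_log can simply be truncated
truncate -s 0 /home/offendingaccount/public_html/error_log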

The life of a Reclaimer is always intense

Set UMW Blogs Right to Ludicrous Speed

UMW Blogs was feeling its age this Spring (it officially turned ten this month—crazy!) and we got a few reports from the folks at DTLT that performance was increasingly becoming an issue. Since 2014 the site had been hosted on AWS (Tim wrote up the details of that move here) with a distributed setup storing uploads on S3, the databases on RDS, and running core files through an EC2 instance. That was a huge leap into the Cloud for us back then, and the performance jump was significant. AWS has proven quite stable over the last two years, but it’s never been cheap—running UMW Blogs back then cost $500+ a month, and I’m sure the prices haven’t dropped significantly since.

Continue reading “Set UMW Blogs Right to Ludicrous Speed”

A WHMCS Invalid Token Error and the glory of blogging

I woke up this morning to find that our WHMCS portal for Reclaim Hosting was having some issues. WHMCS is the software that enables you to manage the business side of cPanel hosting: provisioning, invoicing, billing, renewing, etc. Without it people can’t sign up for new accounts, pay their bill, or access their client area. They can still access their sites through theirdomain.com/cpanel, but they would need to use their SFTP credentials to log in there, so it would get bad quickly, support-wise. So, when I discovered the 503 Service Unavailable error I knew I needed to fix it immediately. It happened at both a good and bad time. Good because it was late night in North America, so demand was not at its peak. Bad because my Reclaim partner Tim Owens was fast asleep. But, in fact, that might have also been good because I tend to lean on him for this stuff given I’m afraid to mess shit up.

Continue reading “A WHMCS Invalid Token Error and the glory of blogging”

Changing Storage Quota for cPanel Accounts

This is a quick and easy tutorial for changing storage space quotas on specific cPanel accounts, perfect for a rainy Sunday morning. I often get this question from someone managing a Domain of One’s Own initiative who needs to modify an account to allow for more storage space.

This process is done in WHM, which is basically the GUI for managing all the accounts on a cPanel server. Once logged in, do a quick find using the word “list” (no quotes) in the upper left-hand corner. Then click “List Accounts,” which will allow you to search for the account you need. You can search by the username or domain as demonstrated below.
Continue reading “Changing Storage Quota for cPanel Accounts”
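As an aside for the terminal-inclined, I believe the same quota change can be scripted server-side through the WHM API, roughly like this (the username is a placeholder, and the quota value should be in megabytes, but double-check the docs):

# Bump the storage quota for a single cPanel account (value in MB)
whmapi1 editquota user=exampleuser quota=5120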