Holy Clustered Cloudlets, bava!

The last month needed to be a deep dive into the Reclaim Cloud, and I think I have held up my end. I migrated quite a few old sites, namely bavaghost, this blog (a couple of times), ds106 (a couple of times), and ds106.club. I also spun up some new applications, such as Jitsi, Discourse, Azuracast, and Minio. It has been a really rewarding, if at times frustrating, month. I am learning a ton about containers, Docker, and the power of virtualized environments. I’ve been able to experiment with different database types and environments for bavatuesdays and ds106 simply by cloning an entire stack and testing the changes, which is illustrated nicely by my last post about switching database types on ds106 and down-sizing this blog to a non-clustered WordPress instance. I followed up on that last night by moving ds106 to a non-clustered WordPress instance as well, and like this blog it’s running cleanly on a fraction of the CPU resources.

The Reclaim Cloud measures resources in a unit known as a cloudlet, which is 128 MB of RAM (plus a proportional share of CPU). So 8 cloudlets is 1 GB, 16 cloudlets 2 GB, etc. Part of my experimentation over the last month was to explore WordPress clusters, given this environment would be ideal for heavily trafficked WordPress sites. And while ds106 and bavatuesdays have a bit of traffic, they are not “high traffic” sites, so reducing the regular cloudlet usage by roughly 60% translates into significant ongoing savings every month.

While I can reserve up to 32 cloudlets, or 4 GB, for my WordPress site to scale up to at any time, I will only be charged for what I use, which is on average 10 cloudlets. With the clustered setup I was being charged for a minimum of 25 cloudlets given it was powering a load balancer, 2 NGINX apps, 3 MySQL databases, a separate storage container, etc.
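For the curious, the cloudlet math is simple enough to sanity-check in a shell (a quick sketch using the 128 MB-per-cloudlet figure above; actual billing rates vary by plan):

echo $(( 25 * 128 ))                      # clustered minimum: 3200 MB reserved
echo $(( 10 * 128 ))                      # non-clustered average: 1280 MB in use
echo $(( (3200 - 1280) * 100 / 3200 ))    # = 60, i.e. the ~60% reduction mentioned above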

With the non-clustered environment, base resource usage (cloudlets) is cut significantly because there are far fewer containers. This containerized LEMP environment, with the MySQL database, NGINX app, and storage all wrapped into one, is far simpler than the cluster, and while it won’t scale as seamlessly as the architecture in the first image, chances are it won’t need to. And even if it does get a spike, the container can scale up another 22 cloudlets, or almost 3 GB of additional resources, before running into any limits.

So, as June comes to a close, I have moved this blog around a bunch over the last 6 months: from cPanel-based shared hosting to Digital Ocean to Kinsta back to Digital Ocean and now to Reclaim Cloud. Digital Ocean was costing about $25 per month for a 4 GB server and a Spaces instance, which is quite reasonable. Kinsta would have been $80 a month for container-based WordPress hosting, which was a bit rich for my blood. Running bavatuesdays on the Reclaim Cloud will cost roughly the same as Digital Ocean for a server that can scale up to 4 GB (although in practice it is only using 1-2 GB of resources at most). And while there is no possible way we can pretend to compete with Digital Ocean on server costs, if we are able to keep pricing within the same ballpark that would be amazing!

bava in the cloud with clusters

http://reclaim.cloud

Last weekend I took a small step for the bava, but potentially a huge step for Reclaim Hosting. This modest blog was migrated (once again!) into a containerized stack in the cloud in ways we could only dream about 7 years ago. There is more to say about this, but I’m not sure now is the right time given there is terror in the streets of the US of A and the fascist-in-charge is declaring warfare on the people. What fresh hell is this?! But, that said, I’ve been hiding from America for years now, and quality lockdown time in Italy can make all the difference. Nonetheless, I find myself oscillating wildly between unfettered excitement about the possibilities of Reclaim and fear and loathing of our geo-political moment. As all the cool technologists say, I can’t go on, I’ll go on….

For anyone following along with my migrations since January, there have been 4 total. I migrated from one of Reclaim Hosting’s shared hosting servers in early January because the bava was becoming an increasingly unpredictable neighbor. The HOA stepped in, it wasn’t pretty. So, it meant new digs for the bava, and I blogged my move from cPanel to a Digital Ocean droplet that I spun up. I installed a LEMP environment, set up email, firewall, etc. I started with a fresh CentOS 7.6 server and set it up as a means to get more comfortable with my inner sysadmin. It went pretty well, and costs me about $30 per month with weekly backups. But while doing a migration I discovered a container-based WordPress hosting service called Kinsta, which piqued my interest, so I tried that out. But it started to get pricey, so I jumped back to Digital Ocean in April (that’s the third move) thinking that was my last.*


But a couple of weeks later I was strongly considering a fourth move to test out a new platform we’re working on, Reclaim Cloud, which would provide our community a virtualized container environment, filling a long-standing gap in our offerings by hosting a wide array of applications that run in environments other than LAMP. I started with a quick migration of my test Ghost instance using the one-click installer for Ghost (yep, that’s right, a one-click installer for Ghost). After that it was a single export/import of content and copying over of some image files. As you can see from the screenshot above, while this Ghost instance was a one-click install, the server stack it runs on is made visible. The site has a load balancer, an NGINX application server, and a database, all of which we can then scale or migrate to different data centers around the world.

In fact, geo-location at Reclaim for cloud-based apps will soon be a drop-down option. You can see the UK flag at the top of this one as hope springs eternal that London will always be trEU. This was dead simple, especially given I was previously hosting my Ghost instance on a cPanel account, which was non-trivial to set up. So, feeling confident after just a few minutes on a Saturday, I spent last Sunday taking on the fourth (and hopefully final) migration of this blog to the Reclaim Cloud! I’ve become an old hand at this by now, so grabbing a database dump was dead simple, but I did run into an issue with using the rsync command to move files to the new server, but I’ll get to that shortly.

First, I had to set up a WordPress cluster with an NGINX load balancer, 2 NGINX application servers, a Galera cluster of 3 MariaDB databases, and an NFS file system. Each of these runs in its own container, pretty cool, no? But don’t be fooled, I didn’t set this up manually—though one could with some dragging and dropping—the Reclaim Cloud has a one-click WordPress Cluster install that lets me spin up a high-performance WordPress instance, all of which are different layers of a containerized stack:

And like having my own VPS at Digital Ocean, I have SSH and SFTP access to each and every container (or node) in the stack.

In fact, the interface also allows access and the ability to edit files right from the web interface—a kind of cloud-based version of the File Manager in cPanel.

I needed SSH access to rsync files from Digital Ocean, but that is where I ran into my only real hiccup. My Digital Ocean server was refusing the connection because it was defaulting to an SSH key, and given the key on the Reclaim Cloud stack was not what it was looking for, I started to get confused. SSH keys can make my head spin; Tim explained it like this:

I never liked that ssh keys were both called keys. Better analogy would be “private key and public door”. You put your door on both servers but your laptop has the private key to both. But the key on your laptop is not on either server, they both only have the public door uploaded. On your laptop at ~/.ssh you have two files id_rsa and id_rsa.pub. The first is the key. Any computer including a server that needs to communicate over ssh without a password would need the key. And your old server was refusing password authentication and requiring a key.
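In practice, Tim’s “private key and public door” analogy maps to commands like these (a generic sketch; the usernames and hostnames are placeholders, not my actual servers):

ssh-keygen -t rsa                          # creates ~/.ssh/id_rsa (the key) and ~/.ssh/id_rsa.pub (the "door")
ssh-copy-id user@server-one.example.com    # uploads only the public "door" to the first server
ssh-copy-id user@server-two.example.com    # and to the second; the private key never leaves the laptop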

That’s why Timmy rules. After that I enabled prompting for the SSH server password when syncing between the Cloud and Digital Ocean using this guide, and I was in business.
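For anyone retracing these steps, the workaround amounts to temporarily allowing password authentication on the old server and letting rsync prompt for it (a sketch of the general approach; the paths and usernames are illustrative, not my actual ones):

# on the Digital Ocean droplet, edit /etc/ssh/sshd_config so it reads:
#   PasswordAuthentication yes
sudo systemctl restart sshd
# then pull the files from the Reclaim Cloud side, entering the password when prompted:
rsync -avz user@old.server.ip:/var/www/html/ /var/www/webroot/ROOT/

With the files synced, the last piece was mapping the domain bavatuesdays.com: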

And issuing an SSL certificate through Let’s Encrypt:
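That SSL step is a couple of clicks in the Reclaim Cloud UI thanks to the built-in Let’s Encrypt add-on; for comparison, the equivalent on a self-managed NGINX box would be something like:

sudo certbot --nginx -d bavatuesdays.com -d www.bavatuesdays.com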

It’s worth noting here that I am using Cloudflare for DNS, and once I pointed bavatuesdays.com to the new IP address and cleared the temporary entry from the local hosts file on my laptop, the site resolved cleanly over https and was showing secure. Mission accomplished. I was a cloud professional, I can do anything. THE BAVA REIGNS! I RULE! Ya know, the usual crap from me.
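If you want to verify a cutover like this without trusting your browser’s cache, two quick checks from the terminal do the trick:

dig +short bavatuesdays.com         # shows the address Cloudflare is answering with for the domain
curl -I https://bavatuesdays.com    # response headers should come back over https with a valid cert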

But that was all before I was terribly humbled by trying to migrate ds106.us the following day. That was a 5-day ordeal that I will blog about directly, but until then—let me enjoy the triumph of a new, clustered day in which my blog seamlessly expands its resources whenever demand runs high.

I woke up to this email, which is what the clustering is all about: I have bavatuesdays set to add another NGINX application server to the mix when resources on the existing two go over 50%. That’s the elasticity of the Cloud that got lost when anything not on your local machine was referred to as “the cloud.” A seamlessly scaling environment that meets resource demands but only costs you what you use, like a utility, was always the promise that most “cloud” VPS providers could not live up to. Once the resource spike was over, I got an email telling me the additional NGINX node was spun down. I am digging this feature of the bava’s new home; I can sleep tight knowing the server Gremlins will be held at bay by the elastic bands of virtualized hardware.


*I worked out the costs of Digital Ocean vs Kinsta, and cost was the big reason to leave Kinsta given the bava was otherwise running quite well in their environment.

N.B.: While writing this, Tim was working on his own post and found some dead image links on the bava as a result of my various moves, and with the following command I fixed a few of them 🙂
wp search-replace 'http://bavatuesdays.com/wp-content/uploads' 'https://bavatuesdays.com/wp-content/uploads'
….
Made 8865 replacements. Please remember to flush your persistent object cache with `wp cache flush`.
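If you’re attempting something similar, wp-cli’s search-replace supports a dry run so you can preview what will change before committing, and the cache flush it nags about is one more command:

wp search-replace 'http://bavatuesdays.com/wp-content/uploads' 'https://bavatuesdays.com/wp-content/uploads' --dry-run
wp cache flush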

Bava moves to Kinsta, story at 11

It’s been surreal here in Northern Italy, and the last thing the world needs right now is another hot take on the Coronavirus or teaching online in the age of pandemics. My turn over the last 10 years has been to explore the new (and old) web-based environments possible for teaching and learning, and frankly the syndicated, asynchronous, and distributed learning environment sounds pretty good right about now. Throw in some radio, and it is near on perfect.

But I profess and digress, but at least it’s not on Twitter. The point of this post is simply to chronicle my migration of this blog from Digital Ocean (DO) to Kinsta yesterday. I created the DO droplet back in January and documented the process (find the blog posts here, here, here, and here). I learn a ton from these projects, and WordPress continues to be the lens through which I use and learn about the web. I recognize the limitations therein, but that said I only have so much emotional labor to spare! So when I was doing a migration from Kinsta to Reclaim Hosting I became really intrigued by Kinsta’s model: to quickly reiterate, they provide container-based WordPress instances, and their service is built on top of Google’s Cloud Platform.

They provide what they call “premium” WordPress hosting, which comes at a price. At the lowest end of the spectrum it costs $30 per month, which is as much as a year’s hosting at Reclaim—and we even throw in a domain. But they aren’t really geared towards the same audience; they are positioned to serve folks who have a site that needs to scale resources seamlessly for both traffic spikes and quick growth. Like I said in my previous post, it reminds me of a dead-simple, elastic Amazon Web Services (AWS) EC2 instance for those who don’t have the sysadmin chops but need to run a beefy, mission-critical WordPress instance. But like AWS, resources come at a premium, and I’ll talk about that later on in this post.

For now let me focus on the migration and Kinsta’s stellar support. I actually tried to migrate the bava from my DO instance two days ago, but I ran into issues because my Kinsta container runs SSH over port 51135, and I could not cleanly move a zipped-up copy of my files between servers. Below is a stripped-down version of the scp command I ran while logged into my DO server, which kept returning connection errors:

scp -P 51135 myusername@my.ip.address:/www/bavatuesdays/public /www/bava/html.zip
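In hindsight, a quick probe of the non-standard port would have surfaced the problem immediately (a simple check, with the host as a placeholder):

nc -zv my.ip.address 51135    # succeeds if the port is open, hangs or refuses if a firewall is blocking it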

I jumped on chat support* and was almost immediately answered by Ruby, who told me there might be issues with my DO instance blocking port 51135, which turned out to be correct; I just was not smart enough to open it. Given the bava is almost 9 GB of files, SFTP was out of the question: with my current upload speed it would take 12+ hours, whereas an scp between servers takes literally minutes for a 9 GB zip file. I left things alone for the day as work at Reclaim started to gear up, but returned to it early yesterday with the idea of actually moving the instance of bavatuesdays I had on Reclaim servers before migrating in January. This would have almost all the same files save anything uploaded after mid-January, which is an easy fix. I unblocked port 51135 on the old server and tried the scp command to the Kinsta container and it worked: 9 GB moved in 6 minutes.
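For the record, “unblocking” the port on a CentOS server like my old droplet comes down to two firewalld commands (assuming firewalld is managing the firewall; ufw or raw iptables would look different):

sudo firewall-cmd --permanent --add-port=51135/tcp
sudo firewall-cmd --reload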

That was awesome, but when I tried unzipping the archive on Kinsta’s server I kept getting disconnected:

I jumped on the chat support, and Ruby once again bailed me out, suggesting I use the external IP address for this rather than the internal one, given it is often more stable. Boom, that worked. I was able to swap in the images I was missing since January, and my site was now on Kinsta.
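The external IP was the real fix here, but a general trick worth knowing for long jobs like a 9 GB unzip is to run them under nohup (or screen/tmux) so a dropped SSH session doesn’t kill them mid-extract (file names here are illustrative):

nohup unzip html.zip -d public > unzip.log 2>&1 &
tail -f unzip.log    # watch progress; safe to disconnect and check back later

One thing I really appreciated was the dead-simple SSL cert and forcing of SSL through the tools panel: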

After that, I tried upgrading to PHP 7.4, and that was dead simple too. All seemed to work, but the WordPress debugging tool showed me there was an issue with the Crayon Syntax Highlighter plugin for anything above PHP 7.2 (it was actually breaking any post with it embedded, which is annoying), so I reverted to 7.2 for now. I should know better than to use plugins 4 years out of date.
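If you are chasing a similar plugin-versus-PHP-version gremlin, wp-cli makes the toggling painless (the plugin slug below is my best guess for Crayon, so confirm yours with wp plugin list first):

wp config set WP_DEBUG true --raw     # surface PHP errors while testing
wp plugin deactivate crayon-syntax-highlighter
wp config set WP_DEBUG false --raw    # turn it back off when done

Meanwhile, I am pointing my domain from my Reclaim cPanel, so no need for Kinsta’s DNS controls, but it’s always interesting to see how they handle that: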

They use Amazon Route 53, just like Reclaim, and I might have to add a domain to see how the controls look. I do like the Gmail MX records radio button given that would, I imagine, pre-fill the records since they’re predictable, and being entirely out of the email game is a beautiful thing!

Kinsta has built-in caching for sites (I need to look more into the details behind that) and they also have a CDN tool, something I’ve never used on the bava, so I wanted to try that out to see if it speeds things up. Now, it is kind of a joke to say that, because speeding my site up means getting it to load in under 4 seconds given how heavily I load up on images, and I never get a rating above D from Pingdom’s speed tester, but I am feeling the site is a bit snappier regardless.

So, I got caching, CDN loading, and the like. Now, when I moved to Kinsta I was unfazed by their 20,000 unique visits limit for the $30 plan given I average about 100-200 daily hits on the bava according to Jetpack—I’m not as big in Japan as I once was. But this morning when I checked, the site had recorded 2,200+ unique visits, even though Jetpack recorded 165. That’s a pretty big discrepancy.

What’s more, I was transferring 2.5 GB of data in less than a day? Who knew?! At this rate I will hit my 20K visits limit in less than 10 days (versus the 30 I am allotted), bumping me up to $60 per month for 40,000 unique visits—and at this rate I would hit even more than that, pushing me into the $100 Business plan range. Yowzers! I was interested in where all the traffic was coming from, and it is bizarre, as you can imagine. All I can say to all you traffic hounds out there is make more GIFs!

My high-res Apocalypse Now GIF from 2011 was hit 56 times and required a whopping 755 MB of bandwidth.

God the bava is unsustainable! But even more surprising is the following image of the Baltimore Police Department putting guns and money on the table being hit over 1400 times in less than 24 hours! WTF! wire106 #4life

It is a strange world, but getting these insights from Kinsta’s analytics is kinda cool, and it reminds me that the bava is its own repository of weirdness outside the social media silos—“ah! how cheerfully we consign ourselves to perdition!” I still have to get my SSH keys set, which I discovered is possible…

Oh yeah, one more thing. I was also concerned about hitting my storage limits given my plan limits me to 10 GB, and when I did a df -h it looked like I was using 13 GB.

I jumped on support again, this time with Salvador, and he also ruled—their support is super solid, which is always a good sign. He gave me a different command to run in www, namely…

du -h -d 4    # human-readable sizes, summarizing 4 directory levels deep

Which gave me what I needed, 9.2 GB, just under the wire:
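The discrepancy makes sense in hindsight: df reports usage for the whole filesystem the container sits on, while du totals only the directories you point it at, so on shared or containerized storage du is the number that counts against your quota. Roughly:

df -h /www      # filesystem-wide view, can include overhead beyond your files
du -sh /www/*   # per-directory totals of what you actually own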

And now I need to find a way to offload some of the media serving given it will quickly make Kinsta prohibitive in terms of costs, but I have thoroughly enjoyed their dashboard and the laser-like focus of creating an entire hosted, optimized experience and environment for one tool.


*Kinsta uses Intercom for online chat support, which is a tool Reclaim had for about a year or two in 2015 and 2016, I believe. We did chat support when it was Tim, myself, and Lauren; that was our team! It was hard, and the chat format invited folks to submit three-word issues like “My site broke” or “HELP me please!” Just the thing every support agent wants to see. I was mindful of this and tried to be kind, give details, and be patient, but the on-demand model can be rough. And I know folks are thinking of that as one way to imagine managing stuff online in times of crisis, but if Reclaim’s experience with chat is at all telling, resist the urge! That said, Ruby and Salvador were there and helped and I appreciated it tremendously, so who knows. But my gut tells me if you have not done web hosting support for the last 10 years and are not prepared with definitive questions and your own troubleshooting already done, you are in for a world of back-and-forth pain.