Running Azuracast on Reclaim Cloud

Following up on my last post about Reclaim Radio, here is how I got the web radio software Azuracast up and running. The process was made easy by the fact that Azuracast has instructions for self-installing a Docker instance of their software. Even better, Reclaim Hosting now has Reclaim Cloud, which just so happens to let you install Docker containers quite easily.

To begin with you will need to set up a Docker Engine on Reclaim Cloud by clicking on the downward-facing arrow next to the Docker tab:

After that select Docker Engine:

At the next prompt create the domain (reclaimradio.uk.reclaim.cloud), name the server (Reclaim Radio), and decide in what region you want the app to live (UK).

After the Docker Engine is created you can log in to the web SSH tool and create the azuracast directory within the /var directory:

mkdir -p /var/azuracast

After that, change directories:

cd /var/azuracast

From within the azuracast directory download their Docker Utility Script:

curl -fsSL https://raw.githubusercontent.com/AzuraCast/AzuraCast/master/docker.sh > docker.sh

Set it as executable:

chmod a+x docker.sh

And then run the Docker installation process:

./docker.sh install

Soon after that the container with the Azuracast software will be up and running.

Keep in mind, you will want to make sure you define the domain as the custom domain you want the software to run on; in our example that is reclaimrad.io versus reclaimradio.uk.reclaim.cloud. This is important when installing the Let’s Encrypt SSL certificate through the application:

./docker.sh letsencrypt-create reclaimrad.io

Finally, you will want to point an A record for the custom domain name to the container’s public IP address wherever you manage DNS; for this instance I was using the Zone Editor in cPanel for reclaimrad.io.
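
For anyone who has not added one before, the record itself is just the bare domain pointed at the container’s IP; in zone-file terms it looks roughly like this (the IP below is a placeholder, not the real container address):

reclaimrad.io.    14400    IN    A    203.0.113.10

In cPanel’s Zone Editor that translates to adding an A record with the domain as the name and the container’s public IP as the address.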

At this point you can log in to Azuracast and start managing the application; here is a good guide to get you started on that journey.

Jitsi on Reclaim Cloud

Zoom’s privacy record has been spotty at best for a while now, but recent news pointing to their shutting down activists’ accounts at the behest of the Chinese government is yet another reason to think twice before using that video conferencing service. As Zoom has taken pole position in the edtech industry alongside the learning management system as schools seem unable to imagine teaching asynchronously, the idea of an open source alternative to Zoom seems welcome. BigBlueButton is an existing option that folks at the OpenETC use and seem quite happy with. Another I’ve heard a lot about more recently is Jitsi Meet, and it just so happens that Reclaim Cloud has a one-click installer for Jitsi, so I spun one up for myself.

Blurring the background for that 3-D motion effect

Not only is Jitsi encrypted end-to-end, but it is also as intuitive and seamless as Zoom. It allows screen sharing, in-app sharing of YouTube videos, chat, hand raising, and full screen or tile view.

There are also speaker stats for clocking who talked for how long, as well as bandwidth indicators for each participant in order to help identify where any connection issues are originating.

There are also integrations with other applications, such as for communal editing of documents in Etherpad or connecting your Google calendar:

Rooms you create on the fly can quickly be secured by the host with a password to prevent Zoom-bombing, and as host you can set these parameters much like in Zoom.

Over the past two weeks we have used Jitsi internally at Reclaim Hosting and it has been seamless. We’ve had no issues with groups of 7 or 8, and the one-click install in Reclaim Cloud can support up to 75 users; if more spaces are needed the instance can be vertically scaled.*

Also, it is worth noting I was able to map the instance on a custom domain, and I now have yet another tool within the complex of my Domain that I can use as needed. Pretty slick.

One thing that is not possible with Jitsi on Reclaim Cloud just yet is recording the sessions within the instance. That is something we are currently exploring, and once that is possible I will be hard pressed to see the advantages of Zoom over Jitsi in any regard.

__________________________________

*Jitsi scales resources up and down based on usage (think of scaling light using a light dimmer) which means you only pay for what you use. What’s more, you can also turn off the instance when it’s not in use to save even more on resource usage, which is true of any application on Reclaim Cloud. Even when idle, applications like Jitsi use a certain amount of server resources (what are termed Cloudlets), so turning off the instance until the next usage is like turning off the lights in a room you won’t be occupying for a while to save energy and money.

Discourse in the Reclaim Cloud

What’s old is new again. In 2015 I wrote about Reclaim Hosting experimenting with the next-generation forum software Discourse using a multi-user Docker setup. We use Discourse for Reclaim’s Community forums and I’ve grown to love the software.* What’s more, as that 2015 post notes, it epitomizes some of the challenges of running these next-generation apps on existing, affordable commodity web hosting: namely it runs Ruby using Nginx and requires transactional email. As Tim pointed out yesterday, I never take the easy road with this stuff. Discourse, even on the Reclaim Cloud, is not a simple, one-click install. But I know folks have asked me about it, so I wanted to see what the process looks like and then document it, which I have done in the Reclaim Community Forums (which will not be available for another two weeks given we are still building this out, but I will paste it in below). I am also working on a video tutorial for this install.

The following guide describes the process of setting up Discourse on Reclaim Cloud, but before I paste it below, a note for my particular case that is not necessarily generalizable. When mapping the forum it seems the proxied A record for discourse.bavatuesdays.com through Cloudflare (where I manage DNS for bavatuesdays) was creating issues. I needed to turn that off for domain mapping to work:

If you are not using Cloudflare this issue should not occur, but it frustrated me for quite a while.

This guide will take you through installing the next-generation forum software Discourse on the Reclaim Cloud using the official Docker image they maintain.

To begin with you will need to set up a Docker Engine on Reclaim Cloud, by clicking on the downward-facing arrow next to the Docker tab:

After that select Docker Engine:

At the next prompt create the domain (bava-discourse.ca.reclaim.cloud), name the server (Bava Discourse), and decide in what region you want the app to live (blame Canada!).

At this point your Docker Engine server will be spun up, and once it is you can log in via the web-based SSH window provided to install Discourse:

From the command line you can install Discourse (the commands follow below), but before you do you need to ensure 1) you have a transactional email account working and 2) an A record pointed to the container’s public IP address if you plan on using a domain other than mydiscourse.us.reclaim.cloud. In this example we’ll be mapping Discourse to the URL discourse.bavatuesdays.com.

Transactional email services, like Mailgun and SparkMail, allow you to set up email sending and receiving for apps like Discourse. For this example we used Mailgun, and the crucial information you will need is the SMTP server address, SMTP port, SMTP username, and SMTP password. Along with the domain name, this is the information you will be prompted for when setting up Discourse, and if the email does not work (i.e. is not verified through your transactional email service) you will not be able to use the application.

This guide for setting up a new email domain using Mailgun could prove useful. But keep in mind there are more options.

Once that is done you can now begin installing Discourse. From the command line run the following commands:

git clone https://github.com/discourse/discourse_docker.git /var/discourse
cd /var/discourse

After that you can launch the Discourse setup tool:

./discourse-setup

At this point you will be prompted for domain, email, SMTP details, etc.

Hostname for your Discourse? [discourse.example.com]:
Email address for admin account(s)? [me@example.com,you@example.com]:
SMTP server address? [smtp.example.com]:
SMTP port? [587]:
SMTP user name? [user@example.com]:
SMTP password? [pa$word]:
Let's Encrypt account email? (ENTER to skip) [me@example.com]:

We recommend defaulting to port 587 and skipping Let’s Encrypt account email if you do not want to receive email about the built-in SSL certificate.

This will generate an app.yml configuration file on your behalf and then start the install, which takes a few minutes. If you need to change these settings after installation, you can run ./discourse-setup again (it will re-use your previous values from the file) or edit containers/app.yml manually with nano and then run ./launcher rebuild app; otherwise your changes will not take effect.
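
For reference, if you do end up editing the file by hand, the hostname and SMTP settings live in the env section of containers/app.yml and look roughly like this (the values below are placeholders, not the ones used for this install):

## excerpt from containers/app.yml with placeholder values
env:
  DISCOURSE_HOSTNAME: discourse.example.com
  DISCOURSE_DEVELOPER_EMAILS: 'me@example.com'
  DISCOURSE_SMTP_ADDRESS: smtp.mailgun.org
  DISCOURSE_SMTP_PORT: 587
  DISCOURSE_SMTP_USER_NAME: postmaster@mg.example.com
  DISCOURSE_SMTP_PASSWORD: "pa$$word"
  LETSENCRYPT_ACCOUNT_EMAIL: me@example.com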

One last thing: if you are using a domain other than the one provided by Reclaim Cloud, you will need to go to Settings > Custom Domains and add the domain there. Additionally, the domain you are pointing needs to have an A record pointed at the Docker Engine container’s public IP address.

After that, go to the domain and Discourse should be installed and ready to set up.

__________________________

*We are now running Reclaim’s Discourse forum in the Reclaim Cloud quite seamlessly.

Archiving ds106 docs

Part of moving ds106 to a new server is making sure you don’t leave a trail of dead links in your wake. With great classes comes great responsibility 🙂 I think I have the caching issues and some of the kinks worked out after the move, but one thing I did want to make sure wasn’t lost in the move was the ds106 wiki, also known as ds106 docs. It was used through 2014, and while it wasn’t a huge part of the class design, for quite a while we used it for tech tutorials, syllabi, and other assorted resources. For example, I forgot about the detailed tutorial I created for an animated series of Dead Zone trading cards:

Or the equally detailed Creating Animated GIFs with MPEG Streamclip and GIMP tutorial.

I understand these resources are not all that useful anymore, but the internet preservationist in me wants them to live on. There are other resources such as various syllabi for classes over the years, such as Alan Levine‘s and Martha Burtis‘s Camp Magic MacGuffin syllabus from Summer 2012, or the syllabus for the ds106 Zone in the Summer of 2013. What I noticed going through my early syllabi for ds106 is that they were all the same; they just started riffing on a different theme as the years went by, but the core remained. And while that seems logical, I really didn’t remember simply copying and pasting the basics and then building the theme and the prompts of the class on the blog and through the assignments. So, all this to say keeping the wiki was part of the deal of moving the site off shared hosting.

One thing you realize when moving sites is the value of using subdomains versus subdirectories; let me explain. The MediaWiki instance was installed at ds106.us/wiki rather than wiki.ds106.us. That might have had more to do with the WordPress Multisite using subdomains and my not knowing how to resolve redirects, but if the wiki had been installed on a subdomain it would still be live right now (which is probably a bad idea regardless). But given I moved everything in ds106.us and the wildcard subdomains to the Reclaim Cloud, I would not be able to run MediaWiki within a subdirectory of ds106. Whereas a subdomain can always be pointed elsewhere, subdirectories lock you into the server you are pointing the root domain to.

So, realizing this, I needed to a) get the wiki up and running temporarily so that I could then use Site Sucker to get a full HTML-based file backup of the site. This is great for archiving and also ensures that the wiki will not go down as application versions change, modules break, or spammers find a way in.* As you can see from the Site Sucker screenshot above, there are files in both ds106.us/docs and ds106.us/wiki because we used the article path function in MediaWiki to have all articles resolve as ds106.us/docs as opposed to ds106.us/wiki, which explains why the root ds106.us folder has both /docs and /wiki and both contain part of the HTML archived files.
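
For reference, that article path trick is controlled by a couple of standard settings in LocalSettings.php; a minimal sketch mirroring the ds106 layout described above would look something like this:

// Minimal sketch of the relevant LocalSettings.php settings: MediaWiki itself
// lives under /wiki while article URLs resolve under /docs (paths mirror the
// ds106 setup described above).
$wgScriptPath  = "/wiki";       // where the MediaWiki code is installed
$wgArticlePath = "/docs/$1";    // pretty article URLs, e.g. ds106.us/docs/Page_Title
// a matching rewrite rule (in .htaccess or the server config) then maps
// /docs/ requests back to /wiki/index.php?title=$1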

Another thing I did before archiving the MediaWiki instance (which I also have a full backup of) was update it from 1.19.xx to 1.33.xx. I had to replace the MonoBook theme, turn off the locked-out module, and adjust some other errors as a result of the update, but I was happy and relieved that it worked after a couple of hours, and MediaWiki was now running a supported version, on PHP 7.3 no less. Part of me still loves the promise and possibility of MediaWiki, but after wrangling with the documentation and the code it was a good reminder why it was never sustainable: the interface and editing were never made any easier, and versioning issues made long-term maintenance onerous.
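
For anyone retracing that upgrade, the core of a MediaWiki version jump is unpacking the new release over the old install (backups in hand), carrying over LocalSettings.php, the images directory, and any extensions you still need, and then running the bundled maintenance script to migrate the database schema; roughly:

cd /path/to/wiki                # wherever the MediaWiki files live (path is illustrative)
php maintenance/update.php      # migrates the database schema to the new version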

And with that, I think the future-proofing of the ds106 infrastructure and trying to ensure there remains some link integrity is in pretty good shape. I’ll do another pass this weekend, and then terminate the shared hosting instance, and commit to the cloud!

________________________________________________

*While I was at it I took a flat-file backup of all of ds106.us and got a database and file dump (as well as a full cPanel backup file) that currently live in Dropbox. So, this is a note-to-self that I do have a full snapshot of the site from June 2020 when I go searching for it in the future.

Migrating ds106 to the Reclaim Cloud

If the migration of bavatuesdays was a relatively simple move to Reclaim Cloud, doing the same for ds106 was anything but. Five days after starting the move I finally was successful, but not before a visceral sense of anguish consumed my entire week. Obsession is not healthy, and at least half the pain was my own damn fault. If I had taken the time to read Mika Epstein’s meticulous 2012 post about moving a pre-3.5 version of WordPress Multisite from blogs.dir to uploads/sites in its entirety, none of this would have ever happened.

I started the migration on Tuesday of last week, and I got everything over pretty cleanly on the first try. At first glance everything was pretty much working so I was thrilled. I was even confident enough to point DNS away from the low-tenant shared hosting server it had been residing on.*

The question might be asked, why move the ds106 sites to Reclaim Cloud at all? First off, I thought it would be a good test for seeing how the new environment handles a WordPress Cluster that is running multisite with subdomains. What’s more, I was interested in finding out during our Reclaim Cloud beta exactly how many resources are consumed and how often the site needs to scale to meet resource demands. Not only to do a little stress-testing on our one-click WordPress Cluster, but also to try to get insight into costs and pricing. All that said, Tim did warn me that I was diving into the deep end of the cloud given the number of moving parts ds106 has, but when have I ever listened to reason?

Like I said, everything seemed smooth at first. All pages and images on ds106.us were loading as expected; I was just having issues getting local images to load on subdomain sites like http://assignments.ds106.us or http://tdc.ds106.us. I figured this would be an easy fix, and started playing with the NGINX configuration given from experience I knew this was most likely a WordPress Multisite re-direct issue. WordPress Multisite was merged into WordPress core in version 3.0, and when this happened older WordPress Multi-user instances (like ds106) were still working off legacy code; one of the biggest differences is where images were uploaded and how they were masked in the URL. In WPMU, images for sub-sites were uploaded to wp-content/blogs.dir/siteID/files, and .htaccess rules re-wrote the URL to show as http://ds106.us/files/image1.jpg. After WordPress 3.0 was released, all new WordPress Multisite instances (no longer was it called Multi-user) would upload to wp-content/uploads/sites/siteID, and they no longer mask the path, effectively exposing the entire URL, namely http://ds106.us/wp-content/uploads/sites/siteID/image1.jpg.

So, that’s a little history to explain why I assumed it was an issue with the .htaccess rules masking the subdomain URLs. In fact, in the end I was right about that part at least. But given ds106.us was moving from an Apache-based stack to one running NGINX, I made another assumption that the issue was with the NGINX redirects, and that’s where I was wrong and lost a ton of time. On the bright side, I learned more than a little about the nginx.conf file, and let me take a moment to document some of that below for ds106 infrastructure posterity. So, the .htaccess file is what Apache uses to control re-directs, and those look something like this for a WordPress Multisite instance before 3.4.2:

# BEGIN WordPress
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]

# uploaded files
RewriteRule ^files/(.+) wp-includes/ms-files.php?file=$1 [L]

RewriteCond %{REQUEST_FILENAME} -f [OR]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^ - [L]
RewriteRule . index.php [L]
# END WordPress

In WordPress 3.5 the ms-files.php function was deprecated, and this was my entire problem, or so I believe. Here is a copy of the .htaccess file for WordPress Multisite after version 3.5:

RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]

# add a trailing slash to /wp-admin
RewriteRule ^wp-admin$ wp-admin/ [R=301,L]

RewriteCond %{REQUEST_FILENAME} -f [OR]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^ - [L]
RewriteRule ^(wp-(content|admin|includes).*) $1 [L]
RewriteRule ^(.*\.php)$ $1 [L]
RewriteRule . index.php [L]

No reference to ms-files.php at all. But (and here is where I got confused, because I do not have the same comfort level with nginx.conf as I do with .htaccess) in the nginx.conf file on the Reclaim Cloud server there is a separate subdom.conf file that deals with these re-directs like so:

    #WPMU Files
        location ~ ^/files/(.*)$ {
                try_files /wp-content/blogs.dir/$blogid/$uri /wp-includes/ms-files.php?file=$1 ;
                access_log off; log_not_found off;      expires max;
        }

    #WPMU x-sendfile to avoid php readfile()
    location ^~ /blogs.dir {
        internal;
        alias /var/www/example.com/htdocs/wp-content/blogs.dir;
        access_log off;     log_not_found off;      expires max;
    }

    #add some rules for static content expiry-headers here
}

[See more on nginx.conf files for WordPress here.]

Notice the reference to WPMU in the comments, not WPMS. But I checked the ds106.us instance on the Apache server it was being migrated from and this line existed:

RewriteRule ^files/(.+) wp-includes/ms-files.php?file=$1 [L]

So ds106 was still trying to use ms-files.php even though it was deprecated long ago. While this is very much a legacy issue that comes with having a relatively complex site online for over 10 years, I’m still stumped as to why the domain masking and redirects for images on the subdomain sites worked cleanly on the Apache server but broke on the NGINX server (any insight there would be greatly appreciated). Regardless, they did break, and everything I tried to do to fix it (and I tried pretty much everything) was to no avail.

I hit this post on Stack Exchange that described exactly my problem fairly early on in my searches, but avoided acting on it right away given I figured moving all uploads for subdomain sites out of blogs.dir into uploads/sites would be a last resort. But alas, 3 days and 4 separate migrations of ds106 later, I finally capitulated and realized that Mika Epstein’s brilliant guide was the only solution I could find to get this site moved and working. On the bright side, this change should help future-proof ds106.us for the next 10 years 🙂

I really don’t have much to add to Mika’s post, but I will make note of some of the specific settings and commands I used along the way as a reminder when in another 10 years I forget I even did this.

I’ll use Martha Burtis‘s May 2011 ds106 course (site ID 3) as an example of a migrated subdomain to capture the commands.

The following command moves the files for the site with ID 3 (may11.ds106.us) into their new location at uploads/sites/3:

mv ~/wp-content/blogs.dir/3 ~/wp-content/uploads/sites/

This command takes all the year and month-based files in 3/files/* and moves them up one level, effectively getting rid of the files directory level:

mv ~/wp-content/uploads/sites/3/files/* ~/wp-content/uploads/sites/3

At this point we use the WP-CLI tool to do a find and replace on the database for all URLs referring to may11.ds106.us/files, replacing them with may11.ds106.us/wp-content/uploads/sites/3:

wp --network --allow-root search-replace 'may11.ds106.us/files' 'may11.ds106.us/wp-content/uploads/sites/3'

Then you do this 8 or 9 more times, once for each subdomain. This would obviously be very, very painful for a much bigger site with tens, hundreds, or thousands of sub-sites, and would need to be scripted (a rough sketch of such a script follows).†
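
Here is a rough, hypothetical sketch of what such a script could look like, using WP-CLI to list the sub-sites and repeating the three commands above for each one; the paths assume you are in the WordPress root, and you would absolutely want to test this on a copy first:

# Hypothetical sketch: repeat the blogs.dir -> uploads/sites move for every sub-site.
# Run from the WordPress root; test against a backup before trusting it.
mkdir -p wp-content/uploads/sites

wp site list --fields=blog_id,domain --format=csv --allow-root | tail -n +2 | \
while IFS=, read -r id domain; do
    [ "$id" = "1" ] && continue                          # the main site never used blogs.dir
    mv wp-content/blogs.dir/"$id" wp-content/uploads/sites/"$id"
    mv wp-content/uploads/sites/"$id"/files/* wp-content/uploads/sites/"$id"/
    rmdir wp-content/uploads/sites/"$id"/files
    wp --network --allow-root search-replace "${domain}/files" "${domain}/wp-content/uploads/sites/${id}"
done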

To move over all the files and the database I had to run two commands. The first was to sync files with the new server:

rsync -avz root@ds106.oldserver.com:/home/ds106/public_html/ /data/ROOT/

Rsync is the best command ever and moves GBs and GBs of data in minutes.

The second command was importing the database, which is 1.5 GB! I exported the database locally, then zipped it up, uploaded it to the database cluster container, unzipped it, and ran the database import tool, which takes a bit of time:

mysql -u user_name -p database_name < SQL_file_to_import
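
For the record, the full round trip looked roughly like the following, using gzip for the compression step in this sketch; the user, database, and host names are placeholders rather than the real ones:

# On the old server: dump and compress the 1.5 GB database
mysqldump -u user_name -p database_name | gzip > ds106-db.sql.gz

# Copy the compressed dump up to the database container on Reclaim Cloud
scp ds106-db.sql.gz root@db-node.example.com:/root/

# On the database container: decompress and import
gunzip ds106-db.sql.gz
mysql -u user_name -p database_name < ds106-db.sql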

After that, I had to turn off ms_files_rewriting, the culprit behind all my issues. That command was provided in Mika’s post linked to above:

INSERT INTO `my_database`.`wp_sitemeta` (`meta_id`, `site_id`, `meta_key`, `meta_value`) VALUES (NULL, '1', 'ms_files_rewriting', '0');

You also need to add the following line to wp-config.php:

define( 'UPLOADBLOGSDIR', 'wp-content/uploads/sites' );

The only other thing I did for safe-keeping was create a quick plugin function based on Mika’s stupid_ms_files_rewriting to force the re-writing for any stragglers to the new URL:

function stupid_ms_files_rewriting() {
    $url = '/wp-content/uploads/sites/' . get_current_blog_id();
    define( 'BLOGUPLOADDIR', $url );
}
add_action( 'init', 'stupid_ms_files_rewriting' );

I put that in mu-plugins, and the migrated ds106.us multisite install worked! There was some elation and relief this past Saturday when it finally worked. I was struggle-bussing all week as a result of this failed migration, but I am happy to say the Reclaim Cloud environment was not the issue; rather, legacy WordPress file re-writes seemed to be the root cause of my problems.

I did have to also update some hardcoded image URLs in the assignment bank theme, but that was easy. The only thing left to do now is fix the ds106 MediaWiki instance and write that to HTML so I can preserve some of the early syllabi and other assorted resources. It was a bit of a beast, but I am very happy to report that ds106 is now on the Reclaim Cloud and receiving all the resources it deserves on-demand 🙂


*VIP1 was the most recent in a series of temporary homes, given how resource intensive the site can be as the syndication hub it has become.

†I did all these changes on the Apache live site before moving them over (take a database back-up if you are living on the edge like me), and then used the following tool to link all the

bava in the cloud with clusters

http://reclaim.cloud

Last weekend I took a small step for the bava, but potentially a huge step for Reclaim Hosting. This modest blog was migrated (once again!) into a containerized stack in the cloud in ways we could only dream about 7 years ago. There is more to say about this, but I’m not sure now is the right time given there is terror in the streets of the US of A and the fascist-in-charge is declaring warfare on the people. What fresh hell is this?! But, that said, I’ve been hiding from America for years now, and quality lockdown time in Italy can make all the difference. Nonetheless, I find myself oscillating wildly between unfettered excitement about the possibilities of Reclaim and fear and loathing of our geo-political moment. As all the cool technologists say, I can’t go on, I’ll go on….

For anyone following along with my migrations since January, there have been 4 total. I migrated from one of Reclaim Hosting’s shared hosting servers in early January because the bava was becoming an increasingly unpredictable neighbor. The HOA stepped in, it wasn’t pretty. So, it meant new digs for the bava, and I blogged my move from cPanel to a Digital Ocean droplet that I spun up. I installed a LEMP environment, set up email, firewall, etc. I started with a fresh CentOS 7.6 server and set it up as a means to get more comfortable with my inner sysadmin. It went pretty well, and cost me about $30 per month with weekly backups. But while doing a migration I discovered a container-based WordPress hosting service called Kinsta which piqued my interest, so I tried that out. But it started to get pricey, so I jumped back to Digital Ocean in April (that’s the third move) thinking that was my last.*


But a couple of weeks later I was strongly considering a fourth move to test out a new platform we’re working on, Reclaim Cloud, which would provide our community a virtualized container environment to fill a long-standing gap in our offerings: hosting a wide array of applications that run in environments other than LAMP. I started with a quick migration of my test Ghost instance using the one-click installer for Ghost (yep, that’s right, a one-click installer for Ghost). After that it was a single export/import of content and copying over of some image files. As you can see from the screenshot above, while this Ghost instance was a one-click install, the server stack it runs on is made visible. The site has a load balancer, an NGINX application server, and a database, which we can then scale or migrate to different data centers around the world.

In fact, geo-location at Reclaim for cloud-based apps will soon be a drop-down option. You can see the UK flag at the top of this one, as hope springs eternal that London will always be trEU. This was dead simple, especially given I was previously hosting my Ghost instance on a cPanel account, which was non-trivial to set up. So, feeling confident after just a few minutes on a Saturday, I spent last Sunday taking on the fourth (and hopefully final) migration of this blog to the Reclaim Cloud! I’ve become an old hand at this by now, so grabbing a database dump was dead simple. I did run into an issue with using the rsync command to move files to the new server, but I’ll get to that shortly.

First, I had to set up a WordPress cluster that has an NGINX load balancer, 2 NGINX application servers, a Galera cluster of 3 MariaDB databases, and an NFS file system. Each of these lives within its own container, pretty cool, no? But don’t be fooled, I didn’t set this up manually (though one could with some dragging and dropping); the Reclaim Cloud has a one-click WordPress Cluster install that allows me to spin up a high-performance WordPress instance, all of which are different layers of a containerized stack:

And like having my own VPS at Digital Ocean, I have SSH and SFTP access to each and every container (or node) in the stack.

In fact, the interface also allows you to access and edit files right from the web interface, a kind of cloud-based version of the File Manager in cPanel.

I needed SSH access to rsync files from Digital Ocean, but that is where I ran into my only real hiccup. My Digital Ocean server was refusing the connection because it was defaulting to an SSH key, and given the key on the Reclaim Cloud stack was not what it was looking for, I started to get confused. SSH keys can make my head spin; Tim explained it like this:

I never liked that ssh keys were both called keys. Better analogy would be “private key and public door”. You put your door on both servers but your laptop has the private key to both. But the key on your laptop is not on either server, they both only have the public door uploaded. On your laptop at ~/.ssh you have two files id_rsa and id_rsa.pub. The first is the key. Any computer including a server that needs to communicate over ssh without a password would need the key. And your old server was refusing password authentication and requiring a key.
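
In practice the fix boils down to telling ssh (and therefore rsync) to skip public key authentication and prompt for the server’s password instead; a minimal sketch, assuming password logins are enabled on the droplet and using placeholder hostnames and paths:

# Force a password prompt instead of the default SSH key
# (hostname and paths are placeholders, not the real servers)
rsync -avz -e "ssh -o PubkeyAuthentication=no -o PreferredAuthentications=password" \
    root@old-droplet.example.com:/var/www/html/ /var/www/webroot/ROOT/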

That’s why Timmy rules. Following this guide, I enabled prompting for the SSH server password when syncing between the Cloud and Digital Ocean, and after that hiccup I was in business. The last piece was mapping the domain bavatuesdays.com:

And issuing an SSL certificate through Let’s Encrypt:

It’s worth noting here that I am using Cloudflare for DNS, and once I pointed bavatuesdays.com to the new IP address and cleared the local hosts file on my laptop the site resolved cleanly with https and was showing secure. Mission accomplished. I was a cloud professional, I can do anything. THE BAVA REIGNS! I RULE!  Ya know, the usual crap from me.

But that was all before I was terribly humbled by trying to migrate ds106.us the following day. That was a 5-day ordeal that I will blog about directly, but until then—let me enjoy the triumph of a new, clustered day of seamless expansion of resources for my blog whenever resources run high.

I woke up to this email, which is what the clustering is all about: I have bavatuesdays set to add another NGINX application server to the mix when resources on the existing two go over 50%. That’s the elasticity of the Cloud that got lost when anything not on your local machine was referred to as the cloud. A seamlessly scaling environment that meets resource demands but only costs you what you use, like a utility, was always the promise that most “cloud” VPS providers could not live up to. Once the resource spike was over I got an email telling me the additional NGINX node was spun down. I am digging this feature of the bava’s new home; I can sleep tight knowing the server Gremlins will be held at bay by the elastic bands of virtualized hardware.


*I worked out the costs of Digital Ocean vs Kinsta, and cost was a big reason to leave Kinsta, given the bava was running quite well in their environment.

N.B.: While writing this, Tim was working on his own post and he found some dead image links on the bava as a result of my various moves, and with the following command I fixed a few of them 🙂
wp search-replace 'https://bavatuesdays.com/wp-content/uploads' 'https://bavatuesdays.com/wp-content/uploads'
….
Made 8865 replacements. Please remember to flush your persistent object cache with `wp cache flush`.