Reclaim Cloud: Accessing Databases

Previously on Reclaim Cloud Learning, I talked through working with SSL certificates on the Cloud and Mattermost training, and now I’m going to round out the Reclaim Cloud month by working through databases. This one, in particular, covers accessing databases (for WordPress or Mattermost, say) without the confirmation emails from Jelastic. I’ll walk you through getting access to the database node from your Cloud Dashboard without your database credentials. Reclaim Hosting uses this method in support, so we don’t have to reach back to the user to see if they’ve saved the credential emails. It saves the back and forth and gets us to the root of an issue quicker.

After you install an application to your environment, say WordPress or Mattermost, you’ll receive a few confirmation emails with passwords to various portions of the site. You’ll want to save these credentials somewhere safe before deleting the emails. You can use this method as well to access your node through Reclaim Cloud.

Accessing the Database Node

First, the biggest step is to access the database node within your browser. You can get there a number of ways; the first is going to the environment URL with the database’s port number appended, which works for a WordPress environment, for instance.

You can also access it through the database node directly if it is a separate instance within your environment; this works for PostgreSQL with Mattermost as well.

Once you’ve loaded the URL you should see a login screen:

Locating Credentials

If you don’t have the credentials that were sent via email after the Jelastic installation, you can also locate them within the site’s configuration files.

You’ll want to navigate to the file management system within your Cloud Dashboard, though you can also use WebSSH if you’d like. This post will walk through the file management system.


For WordPress, you’ll want to work through your wp-config.php file at /var/www/webroot/ROOT/wp-config.php.

The connection settings should look like this:

// ** Database settings – You can get this info from your web host ** //
/** The name of the database for WordPress */
define( 'DB_NAME', 'wp_7698727' );

/** Database username */
define( 'DB_USER', 'jelastic-6599005' );

/** Database password */
define( 'DB_PASSWORD', 'vsecurepassword' );

/** Database hostname */
define( 'DB_HOST', '' );

/** Database charset to use in creating database tables. */
define( 'DB_CHARSET', 'utf8' );

/** The database collate type. Don't change this if in doubt. */
define( 'DB_COLLATE', '' );

You’ll use the DB_USER and DB_PASSWORD values from there to log in to PHPMyAdmin.
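If you’re already in WebSSH, you can also pull those values out of wp-config.php from the command line. This is just a sketch: the wp-config.php below is a stand-in copy created so the snippet runs anywhere, and on a real node you would point the awk line at /var/www/webroot/ROOT/wp-config.php instead.

```shell
# Stand-in wp-config.php with the same shape as the real file (made-up values):
cat > wp-config.php <<'EOF'
define( 'DB_NAME', 'wp_7698727' );
define( 'DB_USER', 'jelastic-6599005' );
define( 'DB_PASSWORD', 'vsecurepassword' );
EOF

# Split each define() on single quotes and print constant = value:
awk -F"'" '/DB_NAME|DB_USER|DB_PASSWORD/ { print $2 " = " $4 }' wp-config.php
```

That prints each constant with its value, ready to paste into the PHPMyAdmin login.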


Mattermost is just a bit different! Since it uses PostgreSQL, the file location varies. You’ll want to navigate to the /root/mattermost/config/config.json file.

The username, password, and database name are located on the DataSource line under SqlSettings. It should look like this:

"DataSource": "postgres://webadmin:nky9FicDb4@sqldb:5432/mattermost?sslmode=disable\u0026connect_timeout=10",
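If you’d rather not eyeball the JSON, the connection string can be split apart on the command line as well. A minimal sketch, under assumptions: the config.json here is a stand-in mirroring the real file at /root/mattermost/config/config.json, and the sed pattern assumes a standard postgres:// URL.

```shell
# Stand-in config.json mirroring the SqlSettings block (made-up credentials):
cat > config.json <<'EOF'
{ "SqlSettings": {
    "DataSource": "postgres://webadmin:nky9FicDb4@sqldb:5432/mattermost?sslmode=disable"
} }
EOF

# Pull out the postgres:// URL and break it into user/pass/host/port/database:
grep -o 'postgres://[^"]*' config.json \
  | sed -E 's|postgres://([^:]+):([^@]+)@([^:]+):([0-9]+)/([^?]+).*|user=\1 pass=\2 host=\3 port=\4 db=\5|'
```

With the sample above, that prints each connection detail on one line, ready for the PostgreSQL login.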


Once you’re logged in you’re good to go! You can make changes to the database just as you would through the user interface, like changing the siteurl or home URL, or grabbing an export of the database.
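For example, a siteurl/home change of the kind mentioned above boils down to a single query. This is a hedged sketch: the default wp_ table prefix and the example domain are both assumptions.

```sql
-- Point the site at a new domain (wp_ prefix and the URL are assumptions):
UPDATE wp_options
   SET option_value = 'https://example.com'
 WHERE option_name IN ('siteurl', 'home');
```

You can paste that straight into the SQL tab in PHPMyAdmin.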

Featured Image: Photo by Henry Dick on Unsplash

Changing Your WordPress Theme from the Database

I wrote about changing your SiteURL and home page URL within your database if you’re working with a blank screen in WordPress, but I wanted to document what it looks like to change your WordPress theme from the database as well. I’ve been working with users to set their sites back to one of the default WordPress themes whenever there’s an error with the site. Say the theme was outdated and broke after an upgrade.

If your theme is wonky, you may see several different errors: a Critical Error in WordPress, an HTTP 500 Error, or weird CSS on pages. If you have a Critical Error on your page or the HTTP 500 Error, you can also diagnose it within the error_log file in your cPanel. You may also lose access to your wp-admin dashboard to change the theme.

To start, you want to locate your database. This can be found within the wp-config.php file or within your Installatron instance for the site:


You’ll then want to navigate to PHPMyAdmin (I’m using cPanel in this instance).


Within the database, you’ll want to move to the wp_options table and locate the template and stylesheet rows. They typically sit within a few lines of each other.

Next, you’ll want to locate the theme you’d like to change to. This is found in the wp-content/themes folder of the install. Any of the default WordPress themes like twentytwentyone or twentytwentytwo will work; WordPress falls back to these themes when recovering from an error.


Back in PHPMyAdmin, you can double-click the previous theme name and change it to the new theme.

Click out of the box and do the same for both the template and stylesheet rows. Once changed, your site should start to load again and you can begin to troubleshoot from there!
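If you’d rather run the change as one query instead of editing the two rows by hand, the equivalent SQL is below. It’s a sketch under assumptions: the default wp_ table prefix and twentytwentyone as the fallback theme.

```sql
-- Swap both rows to a default theme in one go (wp_ prefix assumed):
UPDATE wp_options
   SET option_value = 'twentytwentyone'
 WHERE option_name IN ('template', 'stylesheet');
```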

WordPress Multi-Region on Reclaim Cloud

First there was WPMu, then WPMS, and now WPMR!

Image of WordPress cluster diagram

Read more about the Multi-Region WordPress setup on Jelastic’s blog (click image)

WordPress Multi-Region is more a hosting feature than an application-specific one, and to be clear this functionality is possible for applications beyond WordPress. But Jelastic, our Cloud provider for Reclaim Cloud, has created a one-click application for installing a multi-region WordPress cluster that can replicate across various data centers in real time. There are a few elements of this that are exciting for us as a hosting provider:

  • With a one-click installer it’s easy to spin up a complex WordPress infrastructure across numerous regions
  • It routes traffic so folks get less latency by being served from the instance closest to them
  • It bakes in failover, so that if one server in one region goes down traffic is immediately redirected to another available data center to avoid downtime

These are all good reasons, but the last may be the most exciting because sites go down. Data centers catch fire, DDoS attacks happen, and servers crash; it’s not a matter of if, only when. So, as more and more edtech infrastructure has become mission critical, there need to be options to route around that painful reality, and failover is just that: it replicates a single server setup across various data centers in various regions (US-West, Canada, UK, etc.) to ensure there isn’t one point of failure for an enterprise-level service. That’s pretty exciting given this is something we’ve been dreaming about at Reclaim Hosting for a while, and given we manage quite a few large WordPress instances, this could be an immediate option for folks that want to ensure uptime.

Image of Jelastic WPMR installer

The dialogue for the 1-click WordPress Multi-Region installer in Reclaim Cloud’s marketplace

So, that’s the logic behind WordPress Multi-Region clusters, and while in Nashville for the Reclaim Hosting team retreat Tim started playing with this setup to test failover. It worked in theory while we set it up, and then again in practice last week when our UK Cloud server had issues in the early morning. That reminded me that I was planning to play around with a WPMR setup for this modest standalone WP bava blog—cause the bava should never, ever go down … ever. After that, I’ll see if I can make ds106 a multi-region setup over the winter break to get a sense of how it works with a fairly intense WPMS instance. Everything hereafter jots down my progress over the last two days.

Diagram of a MariaDB Asynchronous Primary/Replica setup

I started by spinning up a multi-region cluster to host bavatuesdays. It was a 3-region cluster (US-East, US-West, and UK), and after figuring out permissions to rsync files across environments in Reclaim Cloud (it was harder than it should’ve been, thanks for the assist Chris Blankenship!) the migration was fairly straightforward. The Multi-Region setup across 3 regions has one primary cluster and two secondary clusters; you rsync the files to the primary application environment and import the database there as well. Soon after that it syncs with the secondary environments, and like magic the replica clusters have all the files and the database settings, posts, comments, etc. that were imported to the primary cluster. The replication happens in less than 60 seconds, so it might say asynchronous, but it’s all but immediate for my purposes.

Image of bavatuesdays blog running on bavafail-1.us cluster

bavatuesdays blog running on cluster

I did get running in a WPMR setup for several hours yesterday while experimenting, but had to revert to the stand-alone instance given I ran into an issue creating new posts that I’m still investigating. But as you can see above, the blog was running on the cluster domain, with a second instance on the US-West cluster and a third in the UK. You can see from the URLs they are in different regions: US (East coast), WC (US West Coast), and the UK. These all worked perfectly, and the way to have them all point to the blog’s domain was to add the public IP of the load balancer for each of the different regional clusters as an A record in your DNS zone editor.

Example from Jelastic's blog about adding A record for each WPMR cluster public IP address in Cloudflare

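In zone-file terms, the setup amounts to several A records on the same hostname, one per regional load balancer. The domain and IPs below are made up purely for illustration:

```
; Hypothetical zone entries: one A record per regional cluster's load balancer
blog.example.com.  300  IN  A  203.0.113.10   ; US-East cluster LB
blog.example.com.  300  IN  A  203.0.113.20   ; US-West cluster LB
blog.example.com.  300  IN  A  203.0.113.30   ; UK cluster LB
```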

Reclaim Cloud provisions the SSL certificates, and after clearing the cluster’s cache the 3 sites were loading as one, with failover and regional traffic routing working well. It was pretty awesome, but there was one small issue: I could not create new posts, which is kind of a deal breaker for a blog. So I had to revert to the old server environment until I figured that issue out.* I was using the failover and routing baked into Jelastic’s setup seamlessly, but I also wanted to test out Cloudflare’s load balancing; I’ll save those DNS explorations for another post. That said, Jelastic lays out the possibilities in their post on DNS load balancing for WordPress clusters quite well.

After setting up the A records and issuing SSL certs the bava was beaming across 3 regions. And when I turned one of the three regional clusters off, the site stayed online—so failover was working! The one issue that was also the case when Tim tested in Nashville is that when the Primary cluster goes down the secondary clusters are supposed to let you write to them. In other words, the WP authoring features accessed at /wp-admin should only work on the Primary cluster by default, but if it were to go down one of the other two secondary clusters should allow you to write. This would not only keep the site online, but also allow posting to continue without issue, all of which should then be synced seamlessly back to the Primary cluster once it comes back online. I was not able to get this functionality to work: after stopping the Primary cluster, the secondary clusters would throw 500 internal server errors when trying to access /wp-admin, so that is another issue to figure out.

I have since spun down the bavafail 3-region test instance after hosing the application servers trying to downgrade PHP from 8.0.10 to 7.4.25 to test out a bad theory, so the first attempt of operation bavafailover with WPMR is dead on the operating room table. Although hope springs eternal at the bava, so I have plans to resuscitate that WPMR setup given I believe it’s a permissions issue—which means I’ll be bothering Chris again.

Image of failover test site

In the interim, however, I’ve spun up a two-region WPMR setup using the domain as a way to ensure adding new posts works on a clean instance (it does), and also to see if you can write to the secondary cluster’s database when the primary is down (you can’t). So there is still definitely more work to do here, but it is really exciting that we are just a couple of issues away from offering enterprise-level traffic routing and failover for folks that need it. Reclaim Cloud is the platform that just keeps on giving in terms of next-level hosting options, and I love it.


*I was running into the same critical error that folks mention in this forum post, but after downgrading PHP versions from 8.0.10 to 7.4.25 on the WPMR cluster everything broke. I then tested PHP 8.0.10 on my LEMP environment for bavatuesdays (not a WPMR setup) and that worked fine. So I’m not sure if it is specific to the WPMR setup in Jelastic (which uses LiteSpeed, whereas my current blog uses Nginx), but this is something I am going to have to revisit shortly.

Open Source FTW or, a Small Anecdote of a WPMS LTI Integration Plugin

Back at Domains 2019 Andy Millington came all the way from the University of Edinburgh to Durham, North Carolina to share his team’s work creating an LTI plugin that integrates WordPress Multisite with Moodle. This is a project Anne-Marie Scott has written about extensively, and I can think of few more eloquent and ardent supporters of open source in higher ed, so in many ways this post is for her–big fan!

I’ll be honest, LTI is not necessarily the sexiest edtech acronym I’ve used on this blog. In fact, for many it’s a more restrictive API that is designated for the worst of teaching tools: the LMS, or VLE, or what have you.* That said, Jon Udell made a pretty compelling argument in defense of the LTI (although I will not forgive him his LMS love) which is very much in line with his thinking through light-weight system integrations for decades now. What’s more, companies like Hypothesis and Lumen Learning have listened to the Dead Moocmen, and they know the LMS is here to stay, and it will never die. So LTI integrations into learning management systems of all kinds are a key part of their success, and while I find the continued dependence on the LMS sad and pathetic, I do understand the need for them. Such are the compromises of an aging edtech.

But if I can pull myself out of the depression this line of thought plunges me into, one silver lining is open source code that makes these LTI integrations more broadly applicable and freely re-usable. And here begins my quick anecdote that I hope Andy and Anne-Marie can appreciate. In early December I was on a call with a university that has a legacy WordPress Multisite that has been around since 2009 and has 17,000+ sites.† What’s more, it’s integrated with their LMS, which in this case is not Moodle but Sakai, and in order for them to offload the hosting they need to re-work that integration. They asked us if we do development work, which is a hard no. We have folks we can recommend, but we realized early on that development is not our game; it’s a totally different skillset and long-term maintenance is always more work than one could ever imagine. That said, during the meeting I believe Tim recommended they take a look at the code on Github for the LTI plugin developed at Edinburgh before going the often expensive and time-consuming custom development route.

When we met again right before the holidays, one of the agenda items was the custom development needed for LTI integration of their WPMS into Sakai, which Lauren and I were sure was going to be a deal breaker. But as we got to that bullet point, the developer said it was no longer a concern: they had looked at the LTI plugin from Edinburgh on Github, and with some slight customizations for Sakai reported it worked brilliantly. In fact, it was even better than what they had been using previously. YEAH!

I’ll be sure to follow up and see if that modification can be shared somewhere for other folks using Sakai and wanting WordPress LTI integration. But in the interim it just seemed important to tell the story, because universities like Edinburgh that lead by giving back, putting the talent they have locally to work for a much broader global community, embody the facet of the power of open that made me fall in love with the whole concept way back in 2004 or 2005. Avanti!


*I still hate the LMS as much as I ever did, and dealing with it tangentially last semester as Antonella started teaching again re-surfaced all the old wounds and that deep-seated loathing of a true tool of teaching oppression.

†As it turns out, this WordPress Multisite instance was a result of a visit and consultation I made in that same year. It has been amazing to me how many sites I helped folks get up and running a decade ago are now hosted with Reclaim; it’s truly a long game/con I have been running all these years 🙂

Holy Clustered Cloudlets, bava!

The last month needed to be a deep dive into the Reclaim Cloud, and I think I have held up my end. I migrated quite a few old sites, namely bavaghost, this blog (a couple of times), and ds106 (a couple of times), and I also spun up some new applications, such as Jitsi, Discourse, Azuracast, and Minio. It has been a really rewarding, if at times frustrating, month. I am learning a ton about containers, Docker, and the power of virtualized environments. I’ve been able to experiment with different database types and environments for bavatuesdays and ds106 simply by cloning an entire stack and testing the changes; this is illustrated nicely by my last post about switching database types on ds106 and down-sizing this blog to a non-clustered WordPress instance. I followed up on that last night by moving ds106 to a non-clustered WordPress instance as well, and like this blog it’s running cleanly on a fraction of the CPU resources.

The Reclaim Cloud measures resources in a unit known as a cloudlet: 128 MB of RAM (plus a share of CPU) per cloudlet. So 8 cloudlets is 1 GB, 16 cloudlets 2 GB, etc. Part of my experimentation over the last month was to explore WordPress clusters given this environment would be ideal for heavily trafficked WordPress sites. And while ds106 and bavatuesdays have a bit of traffic, they are not “high traffic” sites, so reducing the regular cloudlet usage by roughly 60% translates into significant ongoing savings monthly.

While I can reserve up to 32 cloudlets, or 4 GB, for my WordPress site to scale up to at any time, I will only be charged for how much I use, which is on average 10 cloudlets. With the clustered setup I was being charged for a minimum of 25 cloudlets given it was powering a load balancer, 2 NGINX apps, 3 MySQL databases, a separate storage container, etc.
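The cloudlet math is easy enough to sanity-check, since it is just multiples of 128 MB:

```shell
# Convert a few cloudlet counts from this post into MB of RAM:
for c in 8 10 25 32; do
  echo "$c cloudlets = $(( c * 128 )) MB"
done
```

My average of 10 cloudlets works out to 1280 MB, while the 25-cloudlet cluster minimum is 3200 MB, which is where the monthly savings come from.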

With the non-clustered environment, base resource usage (cloudlets) is cut significantly because there are far fewer containers. This containerized LEMP environment, with the MySQL database, NGINX app, and storage all wrapped into one, is far simpler than the cluster, and while it won’t scale as seamlessly as the architecture in the first image, chances are it won’t need to. And even if it does get a spike, the container can scale another 22 cloudlets, or almost 3 GB of resources, before running into any limits.

So, as June comes to a close, I have moved this blog around a bunch over the last 6 months: from cPanel-based shared hosting to Digital Ocean to Kinsta, back to Digital Ocean, and now to Reclaim Cloud. Digital Ocean was costing about $25 per month for a 4 GB server and Spaces instance, which is quite reasonable. Kinsta would have been $80 a month for container-based WordPress hosting, which was a bit rich for my blood. Running bavatuesdays on the Reclaim Cloud will cost roughly the same as Digital Ocean for a server that can scale up to 4 GB (although in practice it is only using 1-2 GB of resources at most). And while there is no possible way we can pretend to compete with Digital Ocean on server costs, if we are able to keep pricing within the same ballpark that would be amazing!

Migrating ds106 to the Reclaim Cloud

If the migration of bavatuesdays was a relatively simple move to Reclaim Cloud, doing the same for ds106 was anything but. Five days after starting the move I was finally successful, but not before a visceral sense of anguish consumed my entire week. Obsession is not healthy, and at least half the pain was my own damn fault. If I had taken the time to read Mika Epstein’s meticulous 2012 post about moving a pre-3.5 WordPress Multisite from blogs.dir to uploads/sites in its entirety, none of this would have ever happened.

I started the migration on Tuesday of last week, and I got everything over pretty cleanly on the first try. At first glance everything was pretty much working so I was thrilled. I was even confident enough to point DNS away from the low-tenant shared hosting server it had been residing on.*

The question might be asked: why move the ds106 sites to Reclaim Cloud at all? First off, I thought it would be a good test of how the new environment handles a WordPress Cluster that is running Multisite with subdomains. What’s more, I was interested in finding out during our Reclaim Cloud beta exactly how many resources are consumed and how often the site needs to scale to meet resource demands. Not only did I want to do a little stress-testing on our one-click WordPress Cluster, I also wanted insight into costs and pricing. All that said, Tim did warn me that I was diving into the deep end of the cloud given the number of moving parts ds106 has, but when have I ever listened to reason?

Like I said, everything seemed smooth at first. All pages and images on the main site were loading as expected; I was just having issues getting local images to load on the subdomain sites. I figured this would be an easy fix, and started playing with the NGINX configuration given from experience I knew this was most likely a WordPress Multisite redirect issue. WordPress Multisite was merged into WordPress core in version 3.0; when this happened, older WordPress Multi-User instances (like ds106) were left working off legacy code, and one of the biggest differences is where images were uploaded and how they were masked in the URL. In WPMU, images for sub-sites were uploaded to wp-content/blogs.dir/siteID/files, and .htaccess rules re-wrote the path so the URL displayed a shorter, masked files/ address. After WordPress 3.0 was released, all new WordPress Multisite instances (no longer was it called Multi-User) uploaded to wp-content/uploads/sites/siteID, and they no longer mask the path, effectively including the entire URL.

So, that’s a little history to explain why I assumed it was an issue with the .htaccess rules masking the subdomain URLs. In fact, in the end I was right about that part at least. But given the site was moving from an Apache-based stack to one running NGINX, I made another assumption that the issue was with the NGINX redirects—and that’s where I was wrong and lost a ton of time. On the bright side, I learned more than a little about the nginx.conf file, so let me take a moment to document some of that below for ds106 infrastructure posterity. The .htaccess file is what Apache uses to control redirects, and those look something like this for a WordPress Multisite instance before 3.5:

# BEGIN WordPress
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]

# uploaded files
RewriteRule ^files/(.+) wp-includes/ms-files.php?file=$1 [L]

RewriteCond %{REQUEST_FILENAME} -f [OR]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^ - [L]
RewriteRule . index.php [L]
# END WordPress

In WordPress 3.5 the ms-files.php function was deprecated, and this was my entire problem, or so I believe. Here is a copy of the .htaccess file for WordPress Multisite after version 3.5:

RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]

# add a trailing slash to /wp-admin
RewriteRule ^wp-admin$ wp-admin/ [R=301,L]

RewriteCond %{REQUEST_FILENAME} -f [OR]
RewriteCond %{REQUEST_FILENAME} -d
RewriteRule ^ - [L]
RewriteRule ^(wp-(content|admin|includes).*) $1 [L]
RewriteRule ^(.*\.php)$ $1 [L]
RewriteRule . index.php [L]

No reference to ms-files.php at all. But (here is where I got confused cause I do not have the same comfort level with nginx.conf as I do .htaccess) in the nginx.conf file on the Reclaim Cloud server there is a separate subdom.conf file that deals with these re-directs like so:

    #WPMU Files
    location ~ ^/files/(.*)$ {
            try_files /wp-content/blogs.dir/$blogid/$uri /wp-includes/ms-files.php?file=$1;
            access_log off; log_not_found off; expires max;
    }

    #WPMU x-sendfile to avoid php readfile()
    location ^~ /blogs.dir {
            alias /var/www/;
            access_log off; log_not_found off; expires max;
    }

    #add some rules for static content expiry-headers here

(See more on nginx.conf files for WordPress here.)

Notice the reference to WPMU in the comments, not WPMS. But I checked the instance on the Apache server it was being migrated from and this line existed:

RewriteRule ^files/(.+) wp-includes/ms-files.php?file=$1 [L]

So ds106 was still trying to use ms-files.php even though it was deprecated long ago. While this is very much a legacy issue that comes with having a relatively complex site online for over 10 years, I’m still stumped as to why the domain masking and redirects for images on the subdomain sites worked cleanly on the Apache server but broke on the NGINX server (any insight there would be greatly appreciated). Regardless, they broke, and everything I tried to fix it (and I tried pretty much everything) was to no avail.

I hit a post on Stack Exchange describing exactly my problem fairly early on in my searches, but avoided acting on it right away given I figured moving all uploads for subdomain sites out of blogs.dir into uploads/sites would be a last resort. But alas, 3 days and 4 separate migrations of ds106 later, I finally capitulated and realized that Mika Epstein’s brilliant guide was the only solution I could find to get this site moved and working. On the bright side, this change should help future-proof ds106 for the next 10 years 🙂

I really don’t have much to add to Mika’s post, but I will make note of some of the specific settings and commands I used along the way as a reminder when in another 10 years I forget I even did this.

I’ll use Martha Burtis’s May 2011 ds106 course (SiteID 3) as an example of a subdomain migration to capture the commands.

The following command moves the files for the site with ID 3 into their new location at uploads/sites/3:

mv ~/wp-content/blogs.dir/3 ~/wp-content/uploads/sites/

This command takes all the year- and month-based files in 3/files/* and moves them up one level, effectively getting rid of the files directory level:

mv ~/wp-content/uploads/sites/3/files/* ~/wp-content/uploads/sites/3

At this point we use the WP-CLI tool to do a find-and-replace on the database, replacing all URLs that refer to the old files path with the new uploads/sites path:

wp --network --allow-root search-replace '' ''

Then you do this 8 or 9 more times, once for each subdomain. This would obviously be very, very painful, and would need to be scripted for a much bigger site with tens, hundreds, or thousands of sub-sites.†
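A scripted version of those two mv steps might look like the sketch below. The site IDs and the temp-directory layout are fabricated scaffolding so the loop can be demoed anywhere; on a real install you would set WP_CONTENT to the actual wp-content path and follow each pass with the wp search-replace for that sub-site’s URLs.

```shell
# Demo scaffolding: fake a tiny blogs.dir layout in a temp directory so the
# loop below can run anywhere (site IDs 3-5 are made up).
WP_CONTENT=$(mktemp -d)
for id in 3 4 5; do
  mkdir -p "$WP_CONTENT/blogs.dir/$id/files/2011/05"
  touch "$WP_CONTENT/blogs.dir/$id/files/2011/05/image.jpg"
done
mkdir -p "$WP_CONTENT/uploads/sites"

# The migration loop: move each site's folder into uploads/sites and
# flatten away the extra files/ level.
for id in 3 4 5; do
  mv "$WP_CONTENT/blogs.dir/$id" "$WP_CONTENT/uploads/sites/"
  mv "$WP_CONTENT/uploads/sites/$id/files/"* "$WP_CONTENT/uploads/sites/$id/"
  rmdir "$WP_CONTENT/uploads/sites/$id/files"
done

ls "$WP_CONTENT/uploads/sites/3/2011/05"
```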

To move over all the files and the database I had to run two commands. The first was to sync files with the new server:

rsync -avz /data/ROOT/

Rsync is the best command ever and moves GBs and GBs of data in minutes.

The second command was importing the database, which is 1.5 GB! I exported the database locally, zipped it up, uploaded it to the database cluster container, then unzipped it and ran the database import tool, which takes a bit of time:

mysql -u user_name -p database_name < SQL_file_to_import

After that, I had to turn off ms_files_rewriting, the culprit behind all my issues. That command was provided in Mika’s post linked to above:

INSERT INTO `my_database`.`wp_sitemeta` (`meta_id`, `site_id`, `meta_key`, `meta_value`) VALUES (NULL, '1', 'ms_files_rewriting', '0');

You also need to add the following line to wp-config.php:

define( 'UPLOADBLOGSDIR', 'wp-content/uploads/sites' );

The only other thing I did for safe-keeping was create a quick plugin function based on Mika’s stupid_ms_files_rewriting to force the re-writing for any stragglers to the new URL:

function stupid_ms_files_rewriting() {
	$url = '/wp-content/uploads/sites/' . get_current_blog_id();
	define( 'BLOGUPLOADDIR', $url );
}
add_action( 'init', 'stupid_ms_files_rewriting' );

I put that in mu-plugins, and the migrated Multisite install worked! There was some elation and relief this past Saturday when it finally worked. I had been struggle-bussing all week as a result of this failed migration, but I am happy to say the Reclaim Cloud environment was not the issue; rather, legacy WordPress file re-writes were the root cause of my problems.

I did also have to update some hardcoded image URLs in the assignment bank theme, but that was easy. The only thing left to do now is fix the ds106 MediaWiki instance and write that out to HTML so I can preserve some of the early syllabi and other assorted resources. It was a bit of a beast, but I am very happy to report that ds106 is now on the Reclaim Cloud and receiving all the resources it deserves on-demand 🙂

*VIP1 was the most recent in a series of temporary homes, given how resource intensive the site can be as the syndication hub it has become.

†I did all these changes on the Apache live site before moving them over (take a database back-up if you are living on the edge like me), and then used the following tool to link all the

The Ghost of bava

This is kind of a record-keeping post; it turns out when you’ve been blogging for nearly 15 years, posts can be useful for reminding you of what you did years earlier that you presently have no recollection of. It’s my small battle against the ever-creeping memory loss that follows on the heels of balding and additional chins—blog against the dying of the light!

Anyway, I’m trying to keep on top of my various sites, and recently I realized that as a result of extracting this blog out of my long-standing WordPress Multisite in 2018, followed by a recent move over to Digital Ocean this January, a number of images in posts that were syndicated from bavatuesdays to other sites were breaking. The work of keeping link rot at bay is daunting, but we have the technology. I was able to log in to the UMW Blogs database and run the following SQL query:

UPDATE wp_13_posts SET post_content = replace(post_content, '', '')

That brought those images back, and it reminds me that I may need to do something similar for the site given I have a few hundred posts syndicated into that site that probably have broken images now. 

But the other site I discovered had broken images as a result of my various moves was the Ghost instance I’ve kept around since 2014. I initially started this site as a sandbox on AWS in order to get a Bitnami image of Ghost running, which was my first time playing with that space in earnest back in 2014. That period was when Tim and I were trying to convince UMW’s IT department to explore AWS in earnest. In fact, we would soon move UMW Blogs to AWS as a proof-of-concept but also to try and pave the way for hosting more through Cloud-based services like Digital Ocean, etc. 

It’s also the time when the idea of servers in the “Cloud” seemed amazing and the idea of new applications running on stacks other than LAMP became real for me. Ghost was one of those. It was the promise of a brave new world, a next-generation sandbox, which was around the time Tim set up container-based hosting for both Ghost and Discourse through Reclaim Hosting as a bit of an experiment. Both worked quite well and were extremely reliable, but there was not much demand, and in terms of support it continued to rely too heavily on Tim for us to sustain it without a more robust container-based infrastructure. We discontinued both services a while back, and are finally shutting down those servers once and for all. And while we had hopes for Cloudron over the last several years, in the end that’s not a direction we’re planning on pursuing. Folks have many options for hosting applications like JupyterHub and the like, and the cost concerns of container-based hosting remain a big question mark—something I learned quickly when using Kinsta.

Part of what makes Reclaim so attractive is we can provide excellent support in tandem with an extremely affordable service. It’s a delicate balance to say the least, but we’ve remained lean and investment-free, and as a result have been able to manage it adroitly. We are still convinced that for most folks a $30 per year hosting plan with a free domain will go a long way towards getting them much of what they need when it comes to a web presence. If we were to double or triple that cost by moving to a container-based infrastructure, it would remove us from our core mission: providing affordable spaces for folks to explore and learn about the web.* What’s more, in light of the current uncertainties we all face, we’re even more committed to keeping costs low and support dialed-in.

Ghost in a Shell

So, I’m not sure why this record-keeping post became a manifesto on affordability, but there you have it. All this to say: while we have been removing our Discourse forum application servers, we also decided to use the occasion to migrate the Ghost instances we’re currently hosting (of which there are only a very few) to shared hosting so that we can retire the server that was running them on top of Cloudron. So, Tim and I spent a morning last week going over his guide for setting up Ghost through our shared hosting on cPanel, and it still works.† The only change is that you now need to use Node.js version 10+ for the latest version of Ghost.

He migrated his Ghost blog to our shared hosting, and I did the same for mine (which only has a few posts). He has been blogging on Ghost for several years now, and I have to say I like the software a lot. It’s clean and quite elegant, and their mission and transparency are a model! But if you don’t have the expertise to install it yourself (whether on cPanel or a VPS), hosting it through them comes at a bit of a cost, with plans starting at about $30 per month. That price-point is a non-starter for most folks starting out. What’s more, for the same $30 that buys one month of hosting for a single Ghost site, cPanel hosting gives you an entire year (including a domain) and plenty of room to dig deeper into the various elements of web hosting.

So, I have toyed with the idea of trying to move all my posts over to Ghost, but when I consider the cost as well as the fact that it has no native way to deal with commenting cleanly, it quickly becomes a non-starter. With over 14,000 comments on this blog, I can’t imagine they would be migrated to anything resembling a clean solution that would not result in just that much more link rot. I guess I am still WordPress #4life.

*And while it remains something we are keenly interested in doing, we are not seeing it as an immediate path given the trade-off between investment costs and the idea of per-container costs for certain applications, which would radically change our pricing model.

†He had to help me figure out some issues I ran into as a result of running the commands as root.

Bava moves to Kinsta, story at 11

It’s been surreal here in Northern Italy, and the last thing the world needs right now is another hot take on the Coronavirus or teaching online in the age of pandemics. My turn over the last 10 years has been to explore new (and old) web-based environments for teaching and learning, and frankly the syndicated, asynchronous, and distributed learning environment sounds pretty good right about now. Throw in some radio, and it is near on perfect.

But I profess and digress, but at least it’s not on Twitter. The point of this post is simply to chronicle my migration of this blog from Digital Ocean (DO) to Kinsta yesterday. I created the DO droplet back in January and documented the process (find the blog posts here, here, here, and here). I learn a ton from these projects, and WordPress continues to be the lens through which I use and learn about the web. I recognize the limitations therein, but that said, I only have so much emotional labor to spare! So when I was doing a migration from Kinsta to Reclaim Hosting I became really intrigued by Kinsta’s model; to quickly reiterate, they provide container-based WordPress instances, and their service is built on top of Google’s Cloud platform.

They provide what they call “premium” WordPress hosting, which comes at a price. At the lowest end of the spectrum it costs $30 per month, which is as much as a year’s hosting at Reclaim—and we even throw in a domain. But they aren’t really geared towards the same audience; they are positioned to serve folks who have a site that needs to scale resources seamlessly for both traffic spikes and quick growth. Like I said in my previous post, it reminds me of a dead-simple, elastic Amazon Web Services (AWS) EC2 instance for those who don’t have the sysadmin chops but need to run a beefy, mission-critical WordPress instance. Like AWS, though, resources come at a premium, but more on that later in this post.

For now let me focus on the migration and Kinsta’s stellar support. I actually tried to migrate the bava from my DO instance two days ago, but I ran into issues because my Kinsta container runs over port 51135, and I could not cleanly move a zipped-up copy of my files between servers. Below is a stripped-down version of the scp command I used when logged into my DO server, which kept returning connection errors:

scp -P 51135 myusername@my.ip.address:/www/bavatuesdays/public /www/bava/

I jumped on chat support* and was almost immediately answered by Ruby, who told me my DO instance might be blocking port 51135, which turned out to be correct; I just was not smart enough to open it. Given the bava is almost 9 GB of files, SFTP is out of the question: with my current upload speed it would take 12+ hours, whereas an scp between servers takes literally minutes for a 9 GB zip file. I left things alone for the day as work at Reclaim started to gear up, but returned to it early yesterday with the idea of actually moving the instance of bavatuesdays I had on Reclaim servers before migrating in January. That copy would have almost all the same files, save anything uploaded after mid-January, which is an easy fix. I unblocked port 51135 on the old server and tried the scp command to the Kinsta container, and it worked: 9 GB moved in 6 minutes.

That was awesome, but when I tried unzipping the directory on Kinsta’s server I was getting disconnected from the server:

I jumped on the chat support, and Ruby once again bailed me out, suggesting I use the external IP address rather than the internal one, given it is often more stable. Boom, that worked. I was able to swap in the images I was missing since January, and my site was now on Kinsta. One thing I really appreciated was the dead-simple SSL cert and forcing of SSL through the tools panel:

After that, I tried upgrading to PHP 7.4, and that was dead simple too. All seemed to work, but the WordPress debugging tool showed me there was an issue with the Crayon Syntax Highlighter plugin on anything above PHP 7.2 (it was actually breaking any post with it embedded, which is annoying), so I reverted to 7.2 for now; I should know better than to use a plugin 4 years out of date. I am pointing my domain from my Reclaim cPanel, so no need for Kinsta’s DNS controls, but it is always interesting to see how they handle that:

They use Amazon Route 53, just like Reclaim, and I might have to add a domain to see how the controls look. I do like the Gmail MX records radio button, given that it would, I imagine, pre-fill the records since they’re predictable, and being entirely out of the email game is a beautiful thing!

Kinsta has built-in caching for sites (I need to look more into the details behind that) and they also have a CDN tool, something I’ve never used on the bava, so I wanted to try it out to see if it speeds things up. Now, it is kind of a joke to say that, because speeding my site up means getting it to load in under 4 seconds given how heavily I lay on the images, and I never get a rating above D from Pingdom’s speed tester, but I am feeling the site is a bit snappier regardless.

So, I got caching, CDN loading, and the like. Now, when I moved to Kinsta I was unfazed by their 20,000 unique-hits limit for the $30 plan, given I average about 100–200 daily hits on the bava according to Jetpack—I’m not as big in Japan as I once was. But this morning when I checked, the site had recorded over 2,200 unique visits, even though Jetpack recorded 165. That’s a pretty big discrepancy.

What’s more, I was transferring 2.5 GB of data in less than a day? Who knew?! At this rate I will hit my 20K visits limit in less than 10 days (versus the 30 I am allotted), bumping me up to $60 per month for 40,000 unique visits—and at this rate I would hit even more than that, pushing me into the $100 Business plan range. Yowzers! I was interested in where all the traffic was coming from, and it is bizarre, as you can imagine. All I can say to all you traffic hounds out there is: make more GIFs!
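Just to sanity-check that back-of-the-envelope projection (the numbers are the ones reported above):

```python
# Rough projection of when the plan's unique-visit quota runs out
# at the current pace, using the figures from the post.
quota = 20_000   # unique visits included in the $30/month plan
per_day = 2_200  # unique visits recorded in the first day

days_to_quota = quota / per_day
print(round(days_to_quota, 1))  # roughly 9.1 days, i.e. under 10
```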

My high-res Apocalypse Now GIF from 2011 was hit 56 times and required a whopping 755 MB of bandwidth.

God the bava is unsustainable! But even more surprising is the following image of the Baltimore Police Department putting guns and money on the table being hit over 1400 times in less than 24 hours! WTF! wire106 #4life

It is a strange world, but getting these insights from Kinsta’s analytics is kinda cool, and it reminds me that the bava is its own repository of weirdness outside the social media silos—“ah! how cheerfully we consign ourselves to perdition!” I still have to get my SSH keys set, which I discovered is possible…

Oh yeah, one more thing. I was also concerned about hitting my storage limit, given my plan caps me at 10 GB, and when I ran df -h it looked like I was using 13 GB.

I jumped on support again, this time with Salvador, and he also ruled—their support is super solid, which is always a good sign. He gave me a different command to run in www, namely:

du -h -d 4 

That gave me what I needed: 9.2 GB, just under the wire:
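The df/du discrepancy makes sense in hindsight: df -h reports usage for the entire filesystem the container sits on, while du totals only the directory tree you point it at. The du-style walk can be sketched in Python; this runs against a throwaway directory rather than a real docroot, and note that real du counts allocated disk blocks, so its numbers can differ slightly from a plain byte sum like this:

```python
import os
import tempfile

def dir_size(path):
    """Sum file sizes under path, the way `du` totals a directory tree."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if not os.path.islink(fp):  # skip symlinks, like du does by default
                total += os.path.getsize(fp)
    return total

# Demo on a temporary directory standing in for a /www docroot.
with tempfile.TemporaryDirectory() as www:
    os.makedirs(os.path.join(www, "public", "files"))
    with open(os.path.join(www, "public", "files", "image.gif"), "wb") as f:
        f.write(b"\x00" * 1024)  # a 1 KB stand-in file
    print(dir_size(www))  # 1024
```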

And now I need to find a way to offload some of the media serving, given it will quickly make Kinsta cost-prohibitive, but I have thoroughly enjoyed their dashboard and the laser-like focus on creating an entirely hosted, optimized experience and environment for one tool.

*Kinsta uses Intercom for online chat support, a tool Reclaim had for about a year or two in 2015 and 2016, I believe. We did chat support when the team was Tim, myself, and Lauren, and that was it! It was hard, and the chat format invited folks to open with three-word issues like “My site broke” or “HELP me please!” Just the thing every support agent wants to see. I was mindful of this and tried to be kind, give details, and be patient, but the on-demand model can be rough. And I know folks are thinking of chat as one way to manage stuff online in times of crisis, but if Reclaim’s experience with it is at all telling, resist the urge! That said, Ruby and Salvador were there and helped, and I appreciated it tremendously, so who knows. But my gut tells me that if you have not done web hosting support for the last 10 years, and are not prepared with definitive questions and your own troubleshooting already done, you are in for a world of back-and-forth pain.

Intentional Learning at Reclaim Hosting: Back to the bava Basics or, Blogging about WordPress

A blog title so long it might as well be a tweet….but it’s not, it’s a god-damned BLOG! I’M BLOGGING, HOLD ALL MY CALLS!!!

Feel the burn, THE BLOG BURN! 

So, I got that out of the way, but it is a reflection of how fired up I am these days. Let me start by saying that Reclaim Hosting has been pretty awesome. Meredith has stepped into the role of Support Manager brilliantly, and Lauren continues to rule as Director of Operations. Add to that our part-time support hire last Spring, Chris Blankenship, who has become the systems administrator Tim and I have been dreaming of! And just a few short months ago we were lucky enough to bring on Gordon Hawley, who comes to us with decades of support experience in the field, has fit in seamlessly, and has proved to be an immediate win. What’s more, we recently hired soon-to-be UMW alum Kathryn Hartraft on a part-time basis, and she is proving why UMW’s Digital Knowledge Center is ground zero for recruiting fresh Reclaim talent. I am not gonna lie, the gritty and grounded UMW students have been an absolute boon to Reclaim Hosting since the beginning, and I am feeling ever more confident that they can run the ship without Tim and me—they are that good!

It’s been rewarding to see the team congeal so well over the past two months, and I think that has given all of us room to become more proactive about filling in the gaps in our collective knowledge, out of which the idea of a more intentional learning program at Reclaim has emerged. The idea is simple: we take a topic for an entire month, recommend various readings and tasks around that theme, and folks are expected to explore it and then narrate their learning on their blogs. For example, in January we focused on migrations (a chore I have become all too familiar with these days), and this month has been dedicated to file and directory structures in cPanel. In fact, last night we had our first meeting with the entire team in a long while in order to reflect upon and wrap up February’s learning, while at the same time introducing the next month’s topic. Meredith brilliantly wrapped up the month on file structures with a discussion of what we learned in the first hour, and I introduced the coming topic for March: all things WordPress!

As usual I was ill-prepared for anything formal, so I basically tried to reinforce the fact that despite the haterz, WordPress still rulez! And while it only powers 35% of all sites on the web, that figure is closer to 90% among Reclaim Hosting’s users. So being familiar with the ins and outs of WordPress is a must for our team. It doesn’t hurt that I cut my teeth on WordPress, and it has been very, very good to me over the years. I often think that beyond marrying Antonella or teaming up with Tim as a business partner, choosing WordPress over Drupal was the best decision I ever made. What’s more, I spent many, many years on this blog trying to get folks in higher ed to take it seriously as a viable alternative to the LMS. In fact, bavatuesdays was one of WordPress’s earliest and most passionate promoters, and as is often the case, I wasn’t wrong.

So, I am fired up because the month of March on bavatuesdays will be a kind of homecoming for some WordPress blogging that I’ve not done in earnest for near on 7 or 8 years. In many ways WordPress has become invisible for me; it is the air I breathe. That said, I do recognize I’m becoming a bit rusty when it comes to the seemingly endless possibilities of hacking it to do your will, but that’s why I still read bionicteaching and the cogdog religiously. So, the Reclaim Hosting team will be blogging about WordPress throughout the month of March, and if you are so inclined, blog along with us: share it with the #reclaimWP tag on Twitter and/or drop your feed in the comments below with that tag so I can pull it into our main site aggregator.

WordPress Multisite: Multi-Network versus Multiple Independent Networks

One of the things we find ourselves doing more and more of at Reclaim Hosting is managed hosting, in particular for WordPress Multisite (WPMS). In the end was the beginning for this blog. So, I was on a call last week where a discussion came up around running multiple, independent WPMS instances versus one WPMS instance with multiple networks, i.e., two functioning WPMS instances using subdomains (or subdirectories) that both point to and share one set of core WordPress files. I experimented with this over 10 years ago by running a WPMS (then called WPMU) service for Longwood University off the core WordPress files of UMW Blogs. I thought it would be revolutionary for the ability to share infrastructure across Virginia’s public institutions of higher ed, but not so much. That said, I was glad to see Curtiss Grymala take the whole idea of multi-networks to the next level for UMW’s main website.

Anyway, enough about the past; that was then, this is now… for now. The question is: why would you run several independent WPMS instances with distinct core files versus running multiple WPMS networks off of one shared set of files, plugins, themes, etc.? For me, the value of running everything off one shared set of files was shared themes, plugins, and updates, which make management easier than across numerous separate installs.* Another benefit was a single space for site/user administration across networks. Additionally, managing single sign-on through one instance should prove a bit easier to set up, but I will need to double-check this one. I also know you can have various portals for each WPMS network mapped on a single set of files, so it will not be confusing for users; for them, the fact that the networks share core files will be invisible. So, in this regard the choice comes down to whether or not consolidation makes sense for the WPMS admin, which is often a question of convenience.

But there may be some practical reasons not to use a multi-network setup. For example, if you are planning on running thousands of sites on each of these WPMS instances, you may want to keep them separate given scaling issues with the WPMS database.** Having three WPMS instances share core files means that if one goes down, they all go down, which can be an issue. Also, if you have an existing WPMS site you want to incorporate into an existing multi-network setup, it may get tricky depending on whether there are shared users across the various WPMS instances you’re combining. I will have to do more research here, and would love to hear about anyone’s experience in this regard, but I imagine users across a multi-network instance would need to be able to access the various networks with the same email/username for the sake of both convenience and single sign-on (which are often one and the same).

Which raises another question that I’m unsure of: if users sign in through one network of a multi-network setup, can they cleanly move between sites on different networks? I’m wondering if keeping single sign-on and users separate in this instance may prove less problematic in the long run. I’ll be working through these scenarios this week, but wanted to post this here cause I know a few folks have experience with running multi-networks on big sites, and I want to be sure I am not overlooking any major red flags before making some recommendations.

*It also allows you to share any premium themes or plugins across one instance.

**Although if this is the case you will have to shard databases anyway, so one could argue it would be easier to do that for one instance rather than many.