Reclaim Cloud: Accessing Databases

Previously on Reclaim Cloud Learning, I talked through working with SSL certificates on the Cloud and Mattermost training, and now I’m going to round out the Reclaim Cloud month by talking through databases. This post, in particular, covers accessing databases (for applications like WordPress or Mattermost) without the confirmation emails from Jelastic. I’ll walk you through getting access to the database node from your Cloud Dashboard without your database credentials. Reclaim Hosting uses this method in our support work so we don’t have to go back to the user to see if they’ve saved the credential emails. It saves time going back and forth and gets us to the root of an issue quicker.

After you install an application like WordPress or Mattermost to your environment, you’ll receive a few confirmation emails with passwords to various portions of the site. You’ll want to save these credentials somewhere safe before deleting the emails, but if you haven’t, you can use this method to access your database node through Reclaim Cloud.

Accessing the Database Node

First, however, the biggest step is to access the database node within your browser. You can do this a number of ways, first by going to the environment URL with the database port number appended. For instance, for WordPress you can use env-3224720.us.reclaim.cloud:8443

You can also access the database through its own node URL if it runs as a separate instance within your environment, like node9764-env-2948928.us.reclaim.cloud. You can use this approach when working with PostgreSQL for Mattermost too.
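
If you want to confirm the node is reachable before hunting down credentials, a quick check from any terminal works (a minimal sketch using the example hostnames above; -k skips certificate verification in case the node presents a self-signed certificate):

curl -kI https://env-3224720.us.reclaim.cloud:8443
curl -kI https://node9764-env-2948928.us.reclaim.cloud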

Once you’ve loaded the URL you should see a login screen:

Locating Credentials

If you don’t have the credentials that were sent via email after the Jelastic install, you can also locate them within the site’s configuration files.

You’ll want to navigate to the file management system within your Cloud Dashboard, though you can also use WebSSH if you’d like. This guide will walk you through the file management system.

WordPress

For WordPress, you’ll want to open your wp-config.php file at /var/www/webroot/ROOT/wp-config.php

Connection settings should look like this:

// ** Database settings - You can get this info from your web host ** //
/** The name of the database for WordPress */
define( 'DB_NAME', 'wp_7698727' );

/** Database username */
define( 'DB_USER', 'jelastic-6599005' );

/** Database password */
define( 'DB_PASSWORD', 'vsecurepassword' );

/** Database hostname */
define( 'DB_HOST', '127.0.0.1' );

/** Database charset to use in creating database tables. */
define( 'DB_CHARSET', 'utf8' );

/** The database collate type. Don't change this if in doubt. */
define( 'DB_COLLATE', '' );

You’ll use the DB_USER and DB_PASSWORD values from there to log in to phpMyAdmin.
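
If you’d rather grab those values from the command line via WebSSH instead of the file manager, a quick grep does the trick (a small sketch; the path assumes the default Reclaim Cloud webroot shown above):

grep -E "DB_USER|DB_PASSWORD" /var/www/webroot/ROOT/wp-config.php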

Mattermost

Mattermost is just a bit different! Since it uses PostgreSQL, the configuration file lives in a different spot. You’ll want to navigate to the /root/mattermost/config/config.json file.

The username, password, and database name are located on the DataSource line under SqlSettings. It should look like:

"DataSource": "postgres://webadmin:nky9FicDb4@sqldb:5432/mattermost?sslmode=disable\u0026connect_timeout=10",
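
If you’re in WebSSH, you can pull that line straight out of the file rather than scrolling for it (a minimal sketch, assuming the default config path above):

grep '"DataSource"' /root/mattermost/config/config.json

Reading the connection string left to right: webadmin is the username, nky9FicDb4 is the password, sqldb:5432 is the host and port, and mattermost is the database name.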


Once you’re logged in you’re good to go! You can make changes to the database like you would through the user interface, such as changing a site URL or homepage URL, or grabbing an export of the database.
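
For the WordPress case, here’s roughly what updating a site URL looks like from the command line, using the credentials from wp-config.php above (a sketch, assuming the mysql client is available on the node and the default wp_ table prefix; swap in your own domain):

mysql -u jelastic-6599005 -p wp_7698727 -e "UPDATE wp_options SET option_value='https://example.com' WHERE option_name IN ('siteurl','home');"

The same UPDATE statement can be pasted into the SQL tab in phpMyAdmin instead.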


Featured Image: Photo by Henry Dick on Unsplash

Reclaim Cloud: SSL Certificates

SSL certificates, oh SSL certificates. Where to begin? These little certificates help hold the big web securely together to protect websites. But they are finicky. Reclaim Hosting uses Let’s Encrypt across our infrastructure and using it on Reclaim Cloud was a no-brainer. Jelastic partners with Let’s Encrypt to bring SSL certificates as an addon to most environments.

This week was all about SSL certificates. And continuing on-trend with Reclaim Cloud learning, specifically using a custom wildcard SSL certificate for a few WordPress Multisites Reclaim Hosting manages.

I’d always issued SSL certificates through Let’s Encrypt in cPanel or within the Addon feature in Reclaim Cloud. Let’s Encrypt makes SSL certificates super easy to work with by provisioning and renewing them automatically.

Let’s Encrypt Addon

Sometimes, though, the SSL certificate doesn’t provision properly. A quick tip I found (thanks to Goutam!): you can add your custom URL to the Let’s Encrypt add-on. While your URL is active and online, Let’s Encrypt may need a “refresher” to issue the certificate to that particular URL.


If the URL is not listed you can add it to the external domains section and apply the setting. You will need to update the SSL certificate from here.

Custom SSL

The next option through Reclaim Cloud for SSL is a custom SSL certificate. You can purchase an SSL certificate from an external company and use it with your environment. I found that the SSL documentation on Jelastic was super helpful in this capacity. We recently had to add an SSL certificate to cover a wildcard subdomain for a WordPress Multisite.

The custom SSL certificate needs three items to implement: a server key, an intermediate certificate, and the domain certificate.

First, we needed to generate the certificate signing request (CSR). This is done through a program like OpenSSL, run through the WebSSH feature for the environment. Generating the CSR also produces the server key. The server key is uploaded to our environment, and the CSR is then sent to the user.
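
Here’s roughly the OpenSSL command for that step (a sketch; the filenames and the 2048-bit key size are illustrative, and a wildcard request would use a Common Name like *.example.com):

openssl req -new -newkey rsa:2048 -nodes -keyout server.key -out request.csr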

Then the user will use the CSR to generate the Intermediate Certificate and finally the Domain Certificate. The Intermediate Certificate is used with the provisioning company to ensure they’re verified to issue the SSL certificate to Reclaim and to the user. These are then sent back to Reclaim to upload to the environment.

Once all three items are in place, we can issue the SSL certificate for the environment. We did run into an issue where we needed to reissue the Let’s Encrypt certificate through the add-on to cover the main URL on the WPMS from there.

Typically SSL certificates last three months when working with Let’s Encrypt, or a year or more when working with another company. Let’s Encrypt renews automatically, while a third-party certificate will need to be updated manually.
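
If you ever need to check when a certificate expires, you can query it from any machine with OpenSSL (a minimal sketch; substitute your own domain):

openssl s_client -connect yourdomain.com:443 -servername yourdomain.com </dev/null 2>/dev/null | openssl x509 -noout -dates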

This process was super interesting to learn, as I’ve only worked with SSL through Let’s Encrypt previously and realized how easy it is to provide your own certificate through a third party if needed!

Photo by Wolf Zimmermann on Unsplash

Reclaim Cloud Training: Mattermost

One of my goals for 2022 is to spend some more focused time learning as much as I can about Reclaim Cloud. I’ve realized more and more lately that I am completely new to this mystical “Cloud” and all that’s possible with it. I initially took some training when we first started, to get prepared on the admin side of Reclaim Cloud where we handle user management and billing.

Even with the admin side training, I am very new to everything Cloud related. And with the Cloud the possibilities are endless, which is super daunting. But I’ve been challenging myself to take some of the Reclaim Cloud tickets that come through support to tackle learning a bit at a time.

This first started when Jim was working on a Mattermost upgrade for a client. We’d been working on getting the container software up to the most recent version of Mattermost.

So Taylor, Jim and I jumped in a huddle in Slack (sidenote: this is the best feature in Slack by far, I’ve been using this a ton in the last couple of weeks) to look over getting the older version of Mattermost migrated to a new container without losing content. And after an hour and a half of talking and brainstorming, we made some headway!

I was in completely new territory. The core functions of Mattermost require MySQL or PostgreSQL as the database. This install was set up using PostgreSQL, which is not a database I’d worked with before this point. We needed access to the database to export a copy as we migrated to the new clone. We first tried to run the import/export commands from the WebSSH console, but quickly realized that we’d need the username and password for the database in particular. The only problem: we didn’t have the username and password to access PostgreSQL.

The user is emailed the username and password when the Postgres database is set up on the environment, but to eliminate some back and forth with the customer we decided to look through the config files to find them. This is one feature cPanel does really well, adding an SSO aspect to phpMyAdmin, that I’ve taken for granted. But I’ve used config files to gain access to databases for WordPress or Omeka migrations in the past, so I figured we could use the same method for Mattermost! There’s a config file somewhere, right?

Looking through the Mattermost documentation, I was able to locate the configuration file necessary to grab the database user and password! Ah ha! Once we had that we could grab an export of the database within phpPgAdmin. After the export, it was smooth sailing: following a failed attempt to rsync from a docker container, we quickly realized we could SCP to a local machine and back to the final location.
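
For the record, here’s roughly what that export-and-hop looks like from the command line (a sketch; the credentials and host come from the DataSource line Mattermost stores in config.json, and the destination hostname is a placeholder):

# from the Mattermost node's WebSSH: dump the database using the credentials in config.json
pg_dump -U webadmin -h sqldb -p 5432 mattermost > /root/mattermost.sql
# from a local machine: pull the dump down, then push it to the new environment (placeholder hostname)
scp root@node9764-env-2948928.us.reclaim.cloud:/root/mattermost.sql .
scp mattermost.sql root@destination-env.us.reclaim.cloud:/root/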

Now, for future Meredith to refer back to, here is some documentation we used during the migration:

I later had to jump off the huddle to work through the Support queue but I would say this was a massive success!! I found that this was by far the best Cloud learning experience I’ve had to date, and I’m super excited for more to come.


Featured image by C Dustin on Unsplash

Updating a Ghost Docker Container on Reclaim Cloud

One of the things I wanted to figure out after installing Ghost as a Docker Container on Reclaim Cloud was how to update it to subsequent versions. Below is a very short video on the process as well as the commands to do this via SSH:

Run the following command to see what user Ghost is installed as; keep in mind Ghost is installed at /var/lib/ghost using the official Docker image:

ls -al /var/lib/ghost/.ghost-cli

The result of the above command lets us know node is the user, in the group node:

-rw-r--r-- 1 node node 83 Jan 5 09:40 /var/lib/ghost/.ghost-cli

We then give the node user sudo privileges by adding it to the sudo group with the following command:

sudo usermod -aG sudo node

And then update ownership of the Ghost install directory:

sudo chown node:node /var/lib/ghost

This command changes the directory permissions:

sudo find ./ -type d -exec chmod 00775 {} \;

You also need to update the node user’s password given you will be prompted for it:

sudo passwd node

After that, change to the node user:

su - node

Change into the ghost install directory:

cd /var/lib/ghost

And finally run the following command to update Ghost:

ghost update

This is where you will be prompted for the node user’s password:

? Sudo Password

If all goes well you should get something similar to the following output, but in this case Ghost was already up to date:
+ sudo systemctl is-active ghost_undefined
✔ Checking system Node.js version - found v14.18.2
✔ Ensuring user is not logged in as ghost user
✔ Checking if logged in user is directory owner
✔ Checking current folder permissions
✔ Checking folder permissions
✔ Checking file permissions
✔ Checking memory availability
✔ Checking free space
✔ Checking for available migrations
✔ Checking for latest Ghost version
All up to date!

Spinning Up a Ghost Docker Container in Reclaim Cloud

Reclaim Hosting is planning on using Ghost to power our monthly newsletters starting this month, so Pilot has been exploring installing it on Reclaim Cloud using the Marketplace app. Turns out it was out-of-date and not so easy to update, so we explored getting the Ghost Docker image running in the Cloud using Docker Engine, which Pilot documented brilliantly on their blog. Turns out it might be even easier to install Ghost using Docker Hub on Reclaim Cloud, so the following video and guide will take you through that process.

So, in Reclaim Cloud you can create a New Environment:

After that, make sure you are on the Docker tab and click on “Select Image”:

Optionally, you can also rename the environment subdomain to something a bit more user-friendly:

After that you will be given a dialogue box with a search bar to find containers on Docker Hub. Type in “Ghost,” and the first option is the official image, which is recommended:

After that select the topmost Ghost image:

Then click “Next” and the Docker container will be selected. After that you want to ensure the public IP address is not selected for this container, and you can also choose from one of four regions to install to:

Once the Ghost container is set, you will need to add a Load Balancer, which will enable the instance to have a Let’s Encrypt certificate for mapped domains. This environment will also contain the public IP address, so be sure to turn that option on:

After that you can click Apply and the environment will be spun up.

Once the environment is spun up, you will need to do two things. First, point an A record for the domain you want to map at the environment’s public IP address. I did this through my DNS settings in Cloudflare, and you can see it below:
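
Once the record is in place, you can confirm it resolves before moving on (a minimal sketch; the domain is a placeholder):

dig +short yourdomain.com A
# should print the environment's public IP once the record has propagated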

After that, you will need to go to the Add-Ons icon for the Load Balancer container and configure the SSL certificate through Let’s Encrypt:

Once that is done you should be able to load Ghost at the mapped domain you specified:

To configure Ghost you would need to go to yourdomain.com/ghost – good luck!

WordPress Multi-Region on Reclaim Cloud

First there was WPMu, then WPMS, and now WPMR!

Image of WordPress cluster diagram

Read more about the Multi-Region WordPress setup on Jelastic’s blog.

WordPress Multi-Region is more a hosting feature than an application-specific one, and to be clear, this functionality is possible for applications beyond WordPress. But Jelastic, our Cloud provider for Reclaim Cloud, has created a one-click application for installing a multi-region WordPress cluster that can replicate across various data centers in real-time. There are a few elements of this that are exciting for us as a hosting provider:

  • With a one-click installer it’s easy to spin up a complex WordPress infrastructure across numerous regions
  • It has the ability to route traffic so folks get less latency by accessing the instance closest to them
  • It bakes in failover, so that if a server in one region goes down, traffic is immediately redirected to another available data center to avoid downtime

These are all good reasons, but the last may be the most exciting because sites go down. Data centers catch fire, DDoS attacks happen, and servers will crash; it’s not a matter of if, only when. So, as more and more edtech infrastructure has become mission critical, there need to be options to route around that painful reality, and failover is just that: it replicates a single server setup across data centers in various regions (US-West, Canada, UK, etc.) to ensure there isn’t one point of failure for an enterprise-level service. That’s pretty exciting given this is something we’ve been dreaming about at Reclaim Hosting for a while, and given we manage quite a few large WordPress instances, this could be an immediate option for folks that want to ensure uptime.

Image of Jelastic WPMR installer

The dialogue for the 1-click WordPress Multi-Region installer in Reclaim Cloud’s marketplace

So, that’s the logic behind WordPress Multi-Region clusters, and while in Nashville for the Reclaim Hosting team retreat Tim started playing with this setup to test failover. It worked in theory while we set it up, and then again in practice last week when our UK Cloud server had issues in the early morning. That reminded me that I was planning to play around with a WPMR setup for this modest standalone WP bava blog, cause the bava should never, ever go down … ever. After that, I’ll see if I can make ds106 a multi-region setup over the winter break to get a sense of how it works with a fairly intense WPMS instance. So everything hereafter will be me jotting down my progress over the last two days.

Diagram of a MariaDB asynchronous Primary/Replica setup

I started by spinning up a multi-region cluster to host bavatuesdays. It was a 3-region cluster (US-East, US-West, and UK), and after figuring out permissions to rsync files across environments in Reclaim Cloud (it was harder than it should’ve been, thanks for the assist Chris Blankenship!) the migration was fairly straightforward. The Multi-Region setup across 3 regions has one primary cluster and two secondary clusters; you rsync the files to the primary application environment and import the database there as well. Soon after that it syncs with the secondary environments, and like magic the replica clusters have all the files and the database settings, posts, comments, etc. that were imported to the primary cluster. The replication happens in less than 60 seconds, so it might say asynchronous, but it’s all but immediate for my purposes.
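
If you’re curious whether replication is actually flowing, you can check from a replica’s database node (a sketch, assuming shell access to the replica’s MariaDB container; the exact node layout depends on the installer):

mysql -u root -p -e "SHOW SLAVE STATUS\G" | grep -E "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master"
# both threads should show Yes, with Seconds_Behind_Master at or near 0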


bavatuesdays blog running on bavafail-1.us cluster

I did get bavatuesdays.com running in a WPMR setup for several hours yesterday while experimenting, but had to revert to the stand-alone instance given I ran into an issue creating new posts that I’m still investigating. But as you can see above, the blog is running on the domain bavafail-1.us.reclaim.cloud, with another instance at bavafail-2.wc.reclaim.cloud and a third at bavafail-3.uk.reclaim.cloud. You can see from the URLs that they are in different regions: US (East Coast), WC (US West Coast), and the UK. These all worked perfectly, and the way to have them all point to bavatuesdays.com was to add the public IP from the load balancer of each regional cluster as an A record in your DNS zone editor.
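
The result is several A records on one hostname, so DNS hands out all three IPs. A quick way to sanity-check that (a sketch; the IPs shown are illustrative documentation addresses, not the real cluster IPs):

dig +short bavatuesdays.com A
# 203.0.113.10   (bavafail-1, US East load balancer)
# 203.0.113.20   (bavafail-2, US West load balancer)
# 198.51.100.30  (bavafail-3, UK load balancer)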


Example from Jelastic’s blog about adding A record for each WPMR cluster public IP address in Cloudflare

Reclaim Cloud provisions the SSL certificates, and after clearing the cluster’s cache the 3 sites were loading as one, with failover and regional traffic routing working well. It was pretty awesome, but there was one small issue: I could not create new posts, which is kind of a deal breaker for a blog. So I had to revert to the old server environment until I figured that issue out.* The failover and routing baked into Jelastic’s setup worked seamlessly, but I also wanted to test out Cloudflare’s load balancing; I’ll save those DNS explorations for another post. That said, Jelastic lays out the possibilities in their post on DNS load balancing for WordPress clusters quite well.

After setting up the A records and issuing SSL certs, the bava was beaming across 3 regions. And when I turned one of the three regional clusters off, the site stayed online, so failover was working! The one issue, which was also the case when Tim tested in Nashville, is that when the primary cluster goes down the secondary clusters are supposed to let you write to them. In other words, the WP authoring features accessed at /wp-admin should only work on the primary cluster by default, but if it were to go down, one of the two secondary clusters should allow you to write. This would not only keep the site online, but also allow posting to continue without issue, all of which should then be synced seamlessly back to the primary cluster once it comes back online. I was not able to get this functionality to work. After stopping the primary cluster, the secondary clusters would throw 500 internal server errors when trying to access /wp-admin, so that is another issue to figure out.

I have since spun down the bavafail 3-region test instance after hosing the application servers trying to downgrade PHP from 8.0.10 to 7.4.25 to test out a bad theory, so the first attempt at operation bavafailover with WPMR is dead on the operating room table. Although hope springs eternal at the bava, so I have plans to resuscitate that WPMR setup given I believe it’s a permissions issue, which means I’ll be bothering Chris again.


bava.rocks failover test site

In the interim, however, I’ve spun up a two-region WPMR setup using the domain bava.rocks as a way to ensure adding new posts works on a clean instance (it does), and also to see if you can write to the secondary cluster’s database when the primary is down (you can’t). So there is still definitely more work to do on this, but it is really exciting that we are just a couple of issues away from offering enterprise-level traffic routing and failover for folks that need it. Reclaim Cloud is the platform that just keeps on giving in terms of next-level hosting options, and I love it.

________________________________________________

*I was running into the same critical error that folks mention in this forum post, but after downgrading PHP versions from 8.0.10 to 7.4.25 on the WPMR cluster everything broke. I then tested PHP 8.0.10 on my LEMP environment for bavatuesdays (not a WPMR setup) and that worked fine. So I’m not sure if it is specific to the WPMR setup in Jelastic, which uses LiteSpeed whereas my current blog uses Nginx, but this is something I am going to have to revisit shortly.

Some Notes on Migrations

This post will be as much about thinking through account migrations for Reclaim Hosting, as trying to capture some of the technical aspects of moving sites to Reclaim Cloud. In fact, it promises to be all over the place, but that is the prerogative of this blog and that’s why I love it so.

Migrations: the act of moving people’s shit from one server to another.

That’s the vernacular for what I’m talking about here when I say migrations; not pretty, but true.

Domain migrations: This can be a very straightforward process, for example when someone on one of our school cPanel accounts wants to move to our shared hosting. cPanel has a transfer tool baked in and we can move accounts between servers within seconds (assuming they are under 10 GB or so); after that we just make sure all the details in our client management software, WHMCS, are aligned and we are good to go. What’s more, migrations like this are easy enough that there is no charge for anyone migrating from a Domain of One’s Own school to our shared hosting.

Third-party free site migrations: There are too many of these to list, but a few popular ones are WordPress.com, Wix, Weebly, and Squarespace. Interestingly enough, the only one of these with anything resembling a migration option is WordPress.com. You can export and import the posts, pages, media, and author data, but you have to re-build the site design with appropriate themes and plugins. All the other services would be a straight-up copy and paste of page content, which should tell you everything you need to know. No HTML files to download, no easily accessible media, no database … nothing. Say what you will about WordPress, but at least it’s an ethos. WordPress.com migrations are fairly straightforward; you just need to prepare folks that some plugins and themes on WordPress.com may not be readily available for free outside that space (I still hate the plugin and theme marketplace and always will). These migrations usually cost $25.

Everything else: Pretty much everything after those two categories is a crap shoot. We have done a fair amount of migrations from just about every host imaginable: Bluehost, HostGator, GoDaddy, Dreamhost, Webfaction, 1and1, etc. And while a few of these use cPanel (Bluehost, HostGator, and sometimes GoDaddy), they’re by no means similar. It’s next to impossible to get a full backup from Bluehost without an upsell, GoDaddy’s interface is as confusing as they come, and good luck making it through the advertisements in HostGator. What’s more, if you live and die by the command line (which I don’t, but should), getting SSH access is often another level of hell. Services like Webfaction (soon to be gone) and Dreamhost are better in that regard, but given they run their own hosting software there is no straightforward migration path, so the migrations are often manual, and if you have an account with 5-10 sites, that is 5-10x the work of one cPanel migration, which wraps everything up into one neat package.

So, long story short, these migrations are by definition more time intensive and as a result more expensive. As a rule of thumb we charge $25 per site migrated in these cases, but as we have learned, some of these services allow folks to run beefy sites on their shared hosting, which is not something we can afford to do. For example, we limit our shared hosting accounts to no more than 100 GB of storage and no more than 1 GB of total server resources. For some sites that want to come over to our shared hosting these limitations will be a hard stop given the amount of storage and CPU resources needed, so that raises two crucial questions before a migration like this even starts: 1) how much data? 2) how many resources? A few other considerations are what PHP version they are running and whether or not they are running the latest version of the application (folks needing to run older apps on older versions of PHP is always a red flag).

I’m sure there are other variations, but for the sake of memory, and to avoid dragging this post out, I’ll leave it at these three categories, taking the last as an example: a site previously run on Webfaction’s shared hosting that needed to be migrated to Reclaim Cloud. The site in question had 170 GB of data, a 2 GB database, and was running Drupal 7 on PHP 5.6. The storage was an immediate flag for our shared hosting, and while previously we would point folks to managed hosting (which can run as much as $400 per month), Reclaim Cloud offers a much more affordable, albeit unmanaged, option. Storage is quite cheap at $0.08 per GB per month, or less than $1 per 10 GB per month. Also, for large sites with regular traffic and a long history, Reclaim Cloud provides dedicated resources wherein you can reserve up to 2 GB of CPU but allow your instance to expand to 4 GB or more if need be, while only paying for those resources if and when needed.

On the Cloud we are able to install a container-based full-stack LiteSpeed server, also known as LLSMP, that is optimized for a PHP app running LiteSpeed (a drop-in replacement for Apache) that also gives the user root access to only that container. So, the client gets more storage, more resources, root access, and an overall more secure experience for roughly $50 per month (this is based on using 10 cloudlets, 150GB of storage, a dedicated IP address, and the LiteSpeed license). What’s more, you have the option to scale instantly should that be of concern.*

So that’s the argument for the Cloud in this case, and it really is a good solution when it comes to speed and experience. The reason I even took on this migration was that it would force me to get more familiar with Reclaim Cloud, in particular creating an LLSMP environment and importing a large Drupal instance. As I predicted, these migrations are never simple, and one of the trickiest pieces, beyond understanding what environment you are coming from and where it is going to, is making sure the DNS points from one server to the other cleanly; more tears have been shed over DNS in the previous 8 years than I care to acknowledge in this post.

That said, here comes the notes part of this post because I’ve learned a few things here that I will be referencing in the future, cue blog as outboard brain.

LLSMP was dead simple to set up on Reclaim Cloud. I installed 6.0.2 running PHP 7.3.27, and once that was done I was able to log in via the web-based SSH and start migrating the files to /var/www/webroot/ROOT

I ultimately had to enable root access on the container, and thankfully Webfaction provides SSH access to their servers, so most of this migration was done at the command line thanks to rsync, which is amazing. Logged into Reclaim Cloud, I ran the following command to sync files from Webfaction:

rsync -avzh user@user.webfactional.com:/home/user/webapps/app /var/www/webroot/ROOT

That worked cleanly. Then I needed to grab a dump of the database on Webfaction, and this did the trick:

mysqldump -u db_username -p db_name > database.sql

After that I rsynced it to the Reclaim Cloud instance:

rsync -avzh user@user.webfactional.com:/home/user/database.sql /var/www/webroot/

After that I had to create the database user and database and grant privileges via the command line, cause I am kind of a big deal. Well, I thought I was until I hit my first snag:

ERROR 1045 (28000): Access denied for user 'user_db'@'127.0.0.1' (using password: YES)

This is where I reached out to help from my Reclaim Hosting colleagues, and the always awesome Chris Blankenship bailed me out with some detailed instructions on how to fix this in Reclaim Cloud:

MySQL actually sees db_user@localhost and db_user@127.0.0.1 as two separate accounts, which can cause problems. cPanel handles this automatically by creating both for all db users, but you’ll have to manually create both in Jelastic containers; so like this:

CREATE USER 'db_user'@'localhost' IDENTIFIED BY 'securepassword';
GRANT ALL PRIVILEGES ON db_name .* TO 'db_user'@'localhost';
CREATE USER 'db_user'@'127.0.0.1' IDENTIFIED BY 'securepassword';
GRANT ALL PRIVILEGES ON db_name .* TO 'db_user'@'127.0.0.1';

To make it simple I usually add skip-grant-tables under the mysqld section of /etc/my.cnf, restart mysql (systemctl restart mysql), and log in as root without the password. From there I run this:
FLUSH PRIVILEGES;

Followed by those commands above. Then I comment out skip-grant-tables under the mysqld section of /etc/my.cnf and restart mysql (systemctl restart mysql) again.

Once I figured out the permissions I was able to import the database.sql file using the following command from the /var/www/webroot/ directory:

mysql -u db_user -p db_name < database.sql

Once that imported, I did a final rsync of files using the following command, with the -u flag to skip files that are newer on the destination:

rsync -avzhu user@user.webfactional.com:/home/user/webapps/app /var/www/webroot/ROOT

There was also the bit where Chris updated ‘localhost’ to ‘127.0.0.1’ in the settings.php file for the Drupal instance, given Reclaim Cloud is particular about that.
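
For reference, that swap can be done with a one-line sed (a sketch; the path assumes Drupal’s default settings.php location under the Reclaim Cloud webroot, and the 'host' key is where Drupal 7 stores the database host):

sed -i "s/'host' => 'localhost'/'host' => '127.0.0.1'/" /var/www/webroot/ROOT/sites/default/settings.php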

So those are very specific notes for this migration of a larger PHP application to a Reclaim Cloud instance. What’s more, I had to do it all again a week later, given the first pass was just to test the instance before moving the production site (this is where rsync is very useful, although the SQL dump had to be re-done). As you can tell by now, this is not a $25 migration; it requires spinning up a server, syncing files between servers, and providing a testing environment. Luckily, Reclaim Cloud environments automatically have a unique test URL that the mapped domain overwrites (namely something like site.uk.reclaim.cloud as opposed to site.com), which makes testing the environment easy before pointing DNS. That was quite convenient, even better than pointing localhost files.

Anyway, this is a long post about migrations and Reclaim Cloud, as much a series of notes as a way of narrating what I hope will be a deeper dive into the possibilities of Reclaim Cloud over the next 12 months or so, given I have been freed up from other responsibilities. But more on that in my next post.
__________________________________________

*The hard part of the Cloud to wrap your head around is the variable pricing. I know it remains fairly consistent from personal experience, but the need for predictability is something Digital Ocean understood and seems to have figured out, which I admire.

Domains21: Jelastic – a Look at the Technology Behind Reclaim Cloud

Keeping up with my OERxDomains21 syndication series, here is another great session featuring Jelastic founder and CEO Ruslan Synytsky, who chats with Tim Owens about all things Cloud.

In the Summer of 2020 Reclaim Hosting rolled out Reclaim Cloud, a next-generation hosting platform that allows faculty, students, and staff at educational institutions to run complex technology stacks with the click of a button. It’s a brave new world of virtualized, containerized infrastructure that in many ways changes what’s possible for ed tech and higher ed IT groups around the world.

Tim Owens chats with Jelastic founder and CEO Ruslan Synytsky about their cloud platform software and how it has enabled hosting companies like Reclaim Hosting to provide its customers a sophisticated and elegant cloud solution that provides them access to a whole suite of next-generation applications.

Reclaim Cloud’s Got GLAM

I’ve been following Australian historian and hacker Tim Sherratt on Twitter for a while now, and his work with the GLAM Workbench is inspiring. GLAM is an acronym for galleries, libraries, archives, and museums, and the workbench provides a series of tools Tim has stitched together to enable research across numerous collections in Australia and New Zealand so that scholars and students can do things with data.

I saw a mention of this work a few weeks back that piqued my interest, and the following tweet spurred me to follow up on installing GLAM Workbench in Reclaim Cloud, so I gave it a shot.

I have to say it was quite easy to get up and running, and the documentation around this project is so robust that it also helped me finally get my head around how applications like Jupyter Lab, Datasette, and Voyant Tools might work together, which is huge for me.

It really was that simple: I created a new environment and used this script to import the YAML file with all the instructions for getting the custom Jupyter Lab notebook spun up. Literally one click, which he has since integrated into the documentation so you can do this right from GitHub into Reclaim Cloud, which is so slick.

And as I noted, the JupyterLab was all set up and ready to go (it was behind a password given that was part of the customizations he built into his container):

The thing about the GLAM Workbench that pushed me beyond the straight install into exploring the app was the amazing documentation they created that seems to just be getting better.

I was able to wrap my head a bit around using Jupyter to run the Trove Harvester, which is essentially the tool that searches across collections and brings back results, and then allows you to harvest text, images, and even PDF versions of the articles. All this is spelled out within the Jupyter Lab, and it allowed me to start digging in.

I did a search across the collections for references to home video rentals and got a solid 8000 hits. I’ll try and do a follow-up post about some of the awesome articles about home video in Australia, but let this page (and a few pull-out ads) suffice for now:

The article on silicon is pretty fascinating, but the ads tell a compelling story of the rise of the mom & pop video store. And this is just one of thousands. Tools like Datasette (which you can work with right from the GLAM Workbench) put the search info into a database format, and something like Voyant Tools would let you visualize it, so I actually started to wrap my head around this suite of tools.

This is amazing, but even cooler is that Tim has been hard at work and has updated the GLAM Workbench documentation to include a Launch in Reclaim Cloud link, so that script runs and you are up and running with GLAM Workbench in Reclaim Cloud. So cool.

And in all his copious spare time, he posted the details of his work creating an installer for GLAM Workbench on Reclaim Hosting’s Community forum, which provides all the details. So thanks, Tim: this is above and beyond!

PeerTube, Sonic Outlaws, and UbuWeb

Over a week ago Tim got a one-click installer working on Reclaim Cloud for PeerTube. He got the details up on the Reclaim Hosting Community site already, so you can read more there.


PeerTube in Marketplace

Getting one-click installers working for a wide variety of apps is a big bonus of Reclaim Cloud, and between Azuracast and PeerTube we have the vertical and horizontal pretty well locked in. I wrote a bit about my explorations with PeerTube already on this blog, so feel free to follow that linked rabbit hole for more. But the long and short of this application is that you can upload videos to your own instance of a fairly robust YouTube-like interface. It has a growing peer-to-peer network, and one killer feature is that it can upload and archive just about any video on the web with a URL. I use it regularly to archive videos I watch online, given the broken web that copyright take-downs on YouTube create, which highlights the worst of the service-centralized internet.

In fact, while Tim and I were working through the PeerTube installer I was watching the 1995 documentary Sonic Outlaws by Craig Baldwin. The copyright bugbear has been with us well before YouTube, and Sonic Outlaws focuses on the fallout of Negativland’s decision to parody U2.

Within days after the release of Negativland’s clever parody of U2 and Casey Kasem, recording industry giant Island Records descended upon the band with a battery of lawyers intent on erasing the piece from the history of rock music.

Craig “Tribulation 99” Baldwin follows this and other intellectual property controversies across the contemporary arts scene. Playful and ironic, his cut-and-paste collage-essay surveys the prospects for an “electronic folk culture” in the midst of an increasingly commodified corporate media landscape.

So, long story short, I wanted to see if PeerTube could use the youtube-dl code to grab and upload the copy of Sonic Outlaws on UbuWeb, and it turns out it can. The only thing is the metadata was not included, but that was fairly easy to fill in.
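
If you’d rather test that kind of fetch outside PeerTube first, youtube-dl itself works from any shell (a hypothetical sketch; the URL is a placeholder, not the actual UbuWeb page):

youtube-dl -o "sonic-outlaws.%(ext)s" "https://example.com/some-video-page"

PeerTube’s URL import in the web UI does effectively the same fetch and then handles the upload for you.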

After that I got to thinking about the initial Tweet of this post from UbuWeb about downloading videos and not trusting the cloud.

I wonder if an application like PeerTube might help bridge that gap a bit by re-decentralizing the cloud so that folks could download and share collections like UbuWeb across numerous servers and local machines in order to not only build their own collections, but share them, and hopefully circumvent the copyright trolls that come with the territory of a centralized video service such as YouTube.
