
Look a(nother) Ghost

Since May of 2014 I have been playing on and off with the blogging platform Ghost. It has been an on-again, off-again affair, and I have never left WordPress for it, but rather use it as a test bed for exploring how Reclaim might host applications outside the LAMP stack—an ongoing theme for us over the last 3 or 4 years. So, I have been marking my progress with running Ghost both here on the bava as well as on my Ghost blog. I talked about the idea of this as the Next Generation Sandbox, experimented with getting Ghost running on AWS using Bitnami, feeble terminal work, setting up key pairs in AWS, moving to Reclaim’s container-based setup for a kind of multi-site Ghost, setting up mail for Ghost, and most recently using Cloudron to set up Ghost.

Seven posts over three years about (and on) Ghost is not that much in the end (I am running out of punny titles), but reading over them while writing this I realized there’s a lot of learning wrapped up in trying to figure out AWS, Bitnami images, the command line, Docker containers, and Cloudron. All stuff I have been trying to focus on more and more, so this side site in many ways lives up to its subtitle: “Letters from the Cloud.” And I came back to it recently because while I blogged about setting up Ghost through Cloudron back in September, my Ghost instance on Reclaim had been terminated when we decided to no longer offer it through Reclaim Hosting. Given my Ghost blogging had been dormant for a while, I totally forgot I was hosting it through Reclaim and it vanished. Luckily I blogged everything on Ghost through the bava, so nothing was lost, and I had backups of all images, etc. So, I used the occasion of things slowing down at Reclaim Hosting and my being under the weather to finally get BavaGhost back online, and now it is!



And you get a server, and you get a server, and you….

I have been remiss in responding to Keegan’s post in early August exploring the idea of “A Server of One’s Own,” but I have not forgotten it. In fact, what he outlines in that post is something that dogs me regularly. Namely, how can we provide more options for folks when it comes to hosting a more diverse array of applications beyond what Domain of One’s Own currently provides?

Let me explain. As it stands right now, Domain of One’s Own has definitive technical limitations given it is built around a LAMP server environment. What does that mean? Well, it means that beyond HTML you are pretty much limited to the PHP, Python, and Perl scripting languages. It also only supports the Apache web server and MySQL (or MariaDB) databases. In other words, it is a specific server environment (a.k.a. a stack) that only supports specific applications. But given the wild success of PHP apps over the last 15 years, in particular WordPress, for most of us web plebeians that has been enough.
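For the curious, here is a minimal sketch of what that stack looks like from the command line on a typical cPanel-style box (the commands assume a CentOS-flavored server, and the exact versions will vary):

    # The four letters of LAMP, checked from a shell on the server:
    cat /etc/redhat-release   # L: the Linux distribution
    httpd -v                  # A: the Apache web server
    mysql --version           # M: MySQL or MariaDB
    php -v                    # P: PHP (with Perl and Python also on hand for scripting)

Anything that wants a different web server, database, or runtime (Node.js apps like Ghost, for example) simply has no place to live in that environment.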



A Domain of the Practical


Adam Croom offered up a hypothesis in response to my post about the “Long Short History of Reclaim.” He argues that as much as Domains at the University of Oklahoma is deeply embedded in a philosophy of empowerment, ownership, and experimentation, it’s also extremely useful. Who knew?!

OU Create for us has become a practical tool for our community as much as a philosophical one. It is indeed an infrastructure that makes building full websites possible to a much greater audience. It also gives us enough slack to build in a plethora of digital literacy components. This complexity is highly valuable in serving a range of needs.

I think the practical component of folks having their own space to publish easily to the web has been a huge draw. Tim has made the whole experience so seamless and dead simple that someone can literally help themselves to an Omeka or WordPress instance (or both) on a brand new domain in seconds. This is where the practical meets good design to make a near perfect marriage. When you take someone through a demo they’re incredulous, “That’s it?” And we’re convinced we can make it even more streamlined. While we’re driven by the ideals undergirding reclaiming the web, we are also deeply conscious of the fact that good design with practical applications will make that vision a reality quicker than any of the rhetoric.

Another interesting post that dovetails with this idea is the great Tony Hirst’s “Getting Your Own Space on the Web.”  Tony acknowledges the value of offering a space to folks who want to assume the responsibility of running their own applications for publishing to the web. But what about those who don’t?

What if you only want to share an application to the web for a short period of time? What if you want to be able to “show and tell” an application for a particular class, and then put it back on the shelf, available to use again but not always running? Or what if you want to access an application that might be difficult to install, or isn’t available for your computer?

I would add to this, what if the application you want to install doesn’t run on the widely popular LAMP stack we’ve built Reclaim Hosting on? This is where Tony’s explorations of virtualized server environments and containers over the last year have been fascinating. Tony has traditionally been the canary in the coal mine when it comes to pushing innovative edtech. The work he’s been doing and the questions he’s been asking fit well with the work Tim and I have been pushing on for over a year (with some serious help from Kin Lane). How does this personal webspace also include virtualized apps and containers glued together with APIs to enable experimentation with a wide range of applications across a variety of server environments and dependencies for short (or long) periods of time? How do we start realizing the possibilities of server infrastructure as a teaching and learning utility we can count on for fast, cheap, and out of control edtech?

Tony is thinking hard about how this affects deploying educational software for distance and online education, his role—assumed or official I don’t know—at the Open University. That practical use case provides some truly compelling challenges and possibilities for such work. The issue remains that it’s still not easy to work with virtual servers and containers, though Docker hosting services like Tutum are beginning to make some real headway in this regard. As my time at UMW comes to a close, more and more of my attention and focus will be pointed at this emerging virtual architecture of edtech, and what it might mean in terms of the work we do at Reclaim.


Abstractions: Running WordPress Multi-Site using AWS, Docker, and BTSync

Heads up: this is not a technical run-through, but more of a conceptual overview. Apologies if you came here looking for a how-to. Hopefully we will have just that in the next few months.

But enough about the past, let’s talk about the future!

[Diagram: AWS WordPress Multisite setup]

This past week Tim Owens and I went down to VCU’s ALT Lab to meet with Tom Woodward, Jon Becker, and Mark Luetke about the work they’re doing with Ram Pages. I already blogged about a couple of plugins they created for making syndication-based course sites dead simple. We also got to talking about some of the ways we have been using Amazon Web Services (AWS) to scale UMW Blogs. At this point Tim took us to school on the whiteboard explaining a possible setup he has been imagining, which is still fairly experimental.

Don’t let Tim fool you, he is DevOps #4life now. He can be found in his spare time watching presentations about load balancing a site for a billion users or scaling infrastructure for small services like Netflix. I’m becoming more and more interested in infrastructure discussions because they highlight interesting trends in the shifting nature of tech that deeply affects edtech, such as virtualization, containers, and APIs.


Anyway, the image above is a look at a potential setup for a large WordPress Multisite instance on AWS. It has a couple of elements worth discussing in some detail because I want to try and get my head around each of them. The first is a load balancer that runs in its own EC2 instance.


What the load balancer does is direct traffic to the EC2 instance running the WordPress core files with the least load. So if you have four EC2 instances each running WordPress’s core files, the one with the least usage gets the next request. Additionally, if all the instances are under too great a load, another could, theoretically, be spun up to meet the demand. That’s one of the core ideas behind elastic computing. The load balancer Tim used for UMW Blogs was HAProxy.
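To make that a little more concrete, here is a minimal, hypothetical HAProxy configuration sketch (the backend names and private IP addresses are made up, and the global and defaults sections are omitted) that spreads traffic across four WordPress web nodes, handing each new request to the node with the fewest active connections:

    frontend wpms_http
        bind *:80
        mode http
        default_backend wpms_web

    backend wpms_web
        mode http
        balance leastconn           # send the request to the least-loaded node
        option httpchk GET /        # basic health check so dead nodes get skipped
        server wp1 10.0.1.11:80 check
        server wp2 10.0.1.12:80 check
        server wp3 10.0.1.13:80 check
        server wp4 10.0.1.14:80 check

Adding a fifth node when load spikes is then just a matter of spinning up another instance and adding one more server line (or automating that step), which is the elastic part of elastic computing.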


As mentioned above, you can set up a series of EC2 instances on AWS with the core WordPress files, save the wp-content directory, which is the only directory folks write to. But you will notice in the fourth instance Tim switched things up. He suggested here that we could have an EC2 instance running Docker that could then run several WordPress instances within it. What’s the difference? This is where I am still struggling a bit, but from what I understand this allows you to spin up new instances quicker, isolate instances from each other for more security, and upgrade and switch out instances seamlessly. It effectively makes WordPress upgrades in a large environment trivial.
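As a rough sketch of what that fourth box might look like, using the official wordpress image from Docker Hub (the container names, database host, shared path, and ports below are all hypothetical):

    # Several isolated WordPress containers on one Docker host, all mounting
    # the same shared wp-content directory and pointing at the same database.
    for i in 1 2 3; do
      docker run -d --name "wp$i" \
        -e WORDPRESS_DB_HOST=db.internal.example:3306 \
        -e WORDPRESS_DB_USER=wordpress \
        -e WORDPRESS_DB_PASSWORD=secret \
        -v /mnt/shared/wp-content:/var/www/html/wp-content \
        -p "808$i:80" \
        wordpress
    done

Because the core files live inside the image rather than on the host, upgrading becomes less about editing files in place and more about pulling a newer image and swapping containers behind the load balancer.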


We have yet another EC2 instance for network file storage, which holds the wp-content files: the uploads, plugins, themes, upgrades, etc. Each of the instances above shares this one; they all write to it, but one of the issues here is that this can be a single point of failure, kinda like the load balancer. So, Tim suggested there is BitTorrent Sync (BTSync), which I still don’t totally understand but sounds awesome. It’s basically technology that syncs files from your computer to a spot on the internet, or between spaces on the internet, etc. So, what if we had several buckets where the various instances of WordPress core files were writing the upload files, themes, plugins, etc., and those buckets used BTSync to share between them almost immediately? Then you wouldn’t have a single point of failure; you would have the various instances writing to various buckets of files that would be constantly syncing using the technology behind BitTorrent. Far out, right?
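A minimal sketch of that shared-storage piece, assuming a hypothetical file-server instance exporting wp-content over NFS; each web node mounts it at the path WordPress writes to:

    # On each WordPress web node (the file server address is made up):
    sudo mount -t nfs 10.0.1.20:/export/wp-content /var/www/html/wp-content

Every node now reads and writes the same uploads, themes, and plugins, which is convenient, but that one file server is exactly the single point of failure the BTSync idea is meant to route around.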


BTSync provides the ability to immediately copy and sync files across several buckets of the same files that get written to regularly.

Another option, and I think this came up before we started talking about BTSync (I’m not sure if it would be possible in addition to BTSync), is to have the blogs.dir folder for a WordPress Multisite, the directory that holds all the individual site uploads, sent to S3, Amazon’s file storage service.
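As a very rough sketch of that idea (the bucket name is hypothetical, and in practice you would more likely lean on a plugin that offloads media to S3 and rewrites the URLs rather than a periodic copy):

    # Push the per-site upload directories for a WordPress Multisite to S3
    # using the AWS command line tools.
    aws s3 sync /var/www/html/wp-content/blogs.dir s3://example-wpms-uploads/blogs.dir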


You get the sense that part of what’s happening when you move an application like WordPress Multisite onto AWS, or some other cloud-based, virtualized server environment, is that each element is abstracted out to its basic functions. Core files that are read-only are separate from anything that is written to, whether that be themes, plugins, or uploads. Additionally, the database is also abstracted out, and you can run an EC2 instance on AWS with Docker containers each running MySQL (with SharDB or HyperDB to further break up the load) that can also replicate various writes and calls using BTSync? No single point of failure, and you greatly reduce the load on a WPMS. I’m completely out of my depth here, but if I accomplished anything it might be giving you some insight into my confusion, which is also my excitement about figuring out the possibilities.

I have no idea if this makes sense, and I would really love any feedback from anyone who knows what they are talking about because I’m admittedly writing this to try and understand it. Regardless, it was pretty awesome hearing Tim lay it out, because it certainly provides a pretty impressive solution to running a large, resource-intensive WordPress Multisite instance.


Duke’s Website has Gone Docker

I was excited to see Tony Hirst retweet the news that Duke University’s website is being run in a Docker environment, and it could even be served through Amazon Web Services. Chris Collins, senior Linux admin at Duke, wrote about “Using Docker and AWS to Survive an Outage,” an outage they had as a result of DDoS attacks on their main site back in January. I love the way he tells the story:

While folks were bouncing ideas around on how to bring the site up again while still struggling with the outage, I mentioned that I could pretty quickly migrate the site over to Amazon Web Services and run it in Docker containers there. The higher-ups gave me the go-ahead and a credit card (very important, heh) and told me to get it setup.  The idea was to have it there so we could fail over to the cloud if we were unable to resolve the outage in a reasonable time.

TL;DR – I did, it was easy, and we failed over all external traffic to the cloud. Details below.

He goes on to describe his process in some detail, and it struck me just how quickly IT infrastructure is shifting; it also made me wonder how many IT organizations in higher ed are truly rethinking their architecture along these lines. It’s one thing to push your services to a third-party vendor that hosts all your stuff; it’s altogether different to bring in a team that understands and is prepared to move a university’s infrastructure into a container-based model that can be hosted in the cloud. Not to mention what this might soon mean for personal options, and a robust menu of teaching and learning applications heretofore unimaginable. This would make the LAMP environment options Domain of One’s Own offers look like Chucky from Child’s Play.

I know Tim and I are looking forward to thinking about what such a container-based architecture might mean for an educational hosting environment that is simple, personalized, and expansive. Tim turned me on to Tutum recently, which starts to get at the idea of a personalized cloud across various providers—something Tim Klapdor gets at brilliantly:

MYOS is very much the model Jon Udell laid out as “hosted life bits” – a number of interconnected services that provide specific functionality, access and affordances across a variety of contexts. Each fits together in a way that allows data to be controlled, managed, connected, shared, published and syndicated. The idea isn’t new, Jon wrote about life bits in 2007, but I think the technology has finally caught up to the idea and it’s now possible to make this a reality in a very practical way.

His post on the topic deserves a close reading, and it’s the best conceptual mapping of what we might build that I have read yet. I wanna help realize this vision, and I guess I am writing about Duke University’s move to Docker because it suggests this is the route Higher Ed IT will be moving towards anyway (sooner or later—which could be a long later for some). Seems we might have an opportunity to inform what it might look like for teaching and learning from the ground floor. It’s not a given it will be better; that will depend upon us imagining what exactly a teaching and learning infrastructure might look like. Tim Klapdor has provided one of the most compelling visions to date, building on Jon Udell’s thinking, but that’s just the beginning.


Dockers

[GIF: the vanishing container, from The Wire, season 2, episode 9]

The above GIF is from an episode of The Wire during Season 2. The docks are ubiquitous in season 2, and this particular image is a visualization from a cloned machine that captures the vanishing container—presumably filled with illegal cargo. I’m fascinated by the representation of technology throughout the series, but season 2 in particular is really interesting. There’s the highlighting of a cultural move to digital cameras, the increasing popularity of the web, GPS, and much more that’s constantly being discussed, but there are also the radical changes to the physical technology of the dock. The first part of the following video features the presentation from Season 2, Episode 7 about the automation of the port of Rotterdam.

Frank Sobotka refers to this as a “horror movie,” noting the eroding need for stevedores, and more generally for labor. The automated container technology becomes a sign of labor’s vanishing past.


At the same time the container systems that have redefined the way shipping works have metaphorically come to servers thanks to Docker.


To the degree I fully understand it, Docker provides an open platform for building, running, and shipping distributed applications. In other words, you can get a pre-configured container through Docker that has the proper server environment for running a specific application. For example, if you want to run the forum software Discourse or the blog engine Ghost (which is what Tim Owens has figured out recently for Reclaim Hosting), we have a server with the Docker engine installed that allows us to quickly fire up different application environments and run them for anyone who requests them.
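To make that concrete, here is a minimal sketch of firing up a Ghost instance with the official ghost image from Docker Hub (the container name and port mapping are placeholders):

    # Run a Ghost blog in its own container; the official image serves
    # Ghost on port 2368 inside the container.
    docker run -d --name bavaghost -p 2368:2368 ghost

    # In practice you would also mount a volume for Ghost's content
    # directory (the exact path inside the container depends on the image
    # version) and put Apache or nginx in front as a reverse proxy so a
    # real domain maps to localhost:2368. A second blog is then just
    # another container on another port.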


And we are grabbing those application images from an open repository of virtualized possibilities that helps us avoid becoming overly dependent on a closed platform like Amazon Web Services, which is a major bonus. Additionally, Tim is playing with Shipyard, which allows you to manage various containers and resources on your server. What strikes me about all of this is how the metaphorical language of docks, shipyards, and containers helps me wrap my head around this technology. What’s more, it’s cool to see it both through the eyes of Frank Sobotka and Tim Owens—two of my heroes :)