A Domain of the Practical


Adam Croom offered up a hypothesis in response to my post about the “Long Short History of Reclaim.” He argues that as much as Domains at the University of Oklahoma is deeply embedded in a philosophy of empowerment, ownership, and experimentation, it’s also extremely useful. Who knew?!

OU Create for us has become a practical tool for our community as much as a philosophical one. It is indeed an infrastructure that makes building full websites possible for a much greater audience. It also gives us enough slack to build in a plethora of digital literacy components. This complexity is highly valuable in serving a range of needs.

I think the practical component of folks having their own space to publish easily to the web has been a huge draw. Tim has made the whole experience so seamless and dead simple that someone can literally help themselves to an Omeka or WordPress instance (or both) on a brand new domain in seconds. This is where the practical meets good design to make a near perfect marriage. When you take someone through a demo they’re incredulous, “That’s it?” And we’re convinced we can make it even more streamlined. While we’re driven by the ideals undergirding reclaiming the web, we are also deeply conscious of the fact that good design with practical applications will make that vision a reality quicker than any of the rhetoric.

Another interesting post that dovetails with this idea is the great Tony Hirst’s “Getting Your Own Space on the Web.”  Tony acknowledges the value of offering a space to folks who want to assume the responsibility of running their own applications for publishing to the web. But what about those who don’t?

What if you only want to share an application to the web for a short period of time? What if you want to be able to “show and tell” an application for a particular class, and then put it back on the shelf, available to use again but not always running? Or what if you want to access an application that might be difficult to install, or isn’t available for your computer?

I would add to this: what if the application you want to install doesn’t run on the widely popular LAMP stack we’ve built Reclaim Hosting on? This is where Tony’s explorations of virtualized server environments and containers over the last year have been fascinating. Tony has traditionally been the canary in the coal mine when it comes to pushing innovative edtech. The work he’s been doing and the questions he’s been asking fit well with the work Tim and I have been pushing on for over a year (with some serious help from Kin Lane). How does this personal webspace also include virtualized apps and containers glued together with APIs, enabling experimentation with a wide range of applications across a variety of server environments and dependencies for short (or long) periods of time? How do we start realizing the possibilities of server infrastructure as a teaching and learning utility we can count on for fast, cheap, and out of control edtech?
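To make that a little more concrete, here’s a minimal sketch of the kind of disposable, containerized app we’re imagining, assuming Docker is installed (the image and container name here are purely illustrative):

docker run -d -p 8080:80 --name classdemo nginx   # spin up a throwaway web app for a class demo
docker stop classdemo                             # put it back on the shelf when class is over
docker start classdemo                            # pull it back down the next time you need it
docker rm -f classdemo                            # or throw it away entirely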

Tony is thinking hard about how this affects deploying educational software for distance and online education, his role—assumed or official, I don’t know—at the Open University. That practical use case provides some truly compelling challenges and possibilities for such work. The issue remains that it’s still not easy to work with virtual servers and containers, though Docker hosting services like Tutum are beginning to make some real headway in this regard. As my time at UMW comes to a close, more and more of my attention and focus will be pointed at this emerging virtual architecture of edtech, and what it might mean in terms of the work we do at Reclaim.

Reclaiming Marx’s Capital

On Friday an old friend and collaborator from the CUNY Graduate Center, Chris Caruso, reached out to me about hosting David Harvey’s website. There is the awesome factor that Harvey is one of the foremost Marxist theorists in the world, and he has been pretty smart about letting connected folks at the Grad Center help him get his ideas out to a broader audience via social media. He has 59K followers on Twitter, and his site is an ongoing stream of the talks he delivers regularly around the world. Additionally, the site houses two complete courses of Harvey teaching Marx’s opus Capital, Volumes 1 & 2, recorded back in 2008 and 2011.



My relationship to this course starts back in 2008, when Chris reached out to me about the possibilities of building a blog aggregator/wiki/forum hub around the videos for the lectures on Volume 1. I blogged at length about the process of setting this all up back in 2008, and it was not all that successful. I got about 50 folks creating blogs, jumping into forums, etc., but participation was not all that consistent, and the hub I created died a slow death. The same wasn’t true for Harvey’s site, though. Chris Caruso, the man behind the curtain of Reading Capital, has done a remarkable job of aggregating Harvey’s work over the last seven years. He ran the project that got both of the Reading Capital courses online and available for free (in a multitude of formats, no less). You could argue Caruso’s work to produce Harvey’s course was a prototype of what the xMOOCs would pick up and run with four years later. The difference here was that they produced and maintained the content on Harvey’s own domain, did it for a fraction of the cost, and invited a hack like me to try and frame an aggregated blog community. Two out of three successes ain’t bad.

In many ways bringing David Harvey’s site into the Reclaim Hosting fold feels like the convergence of work I’ve been part of since the beginning. It’s very cool to still feel connected to awesome people and projects like this that truly forward the value of sharing the best of academic thinking without all the corporate/business model nonsense that is increasingly being grafted on top of the idea of the course. The maintenance of Harvey’s site has been fueled by donations over the last seven years, and folks continue to be generous to a fine, independent cause. Reclaim wants to do its small part, and we’ll ensure Harvey’s site and work have a home for as long as Reclaim Hosting is around. And that’s just the beginning: we would love to help out other independent, socially relevant projects that need hosting support if we can. Let us know.

While making certain the site moved over to Reclaim’s Huskerdu server cleanly (Huskerdu meets Marx, groovy), I used lecture 8 from the first course on Volume 1 of Capital to check that the media transferred intact. As the site was coming over I randomly watched this video all the way through, and I was really struck by Harvey’s framing of Capital as Marx’s theory of how societies change. He dedicates a significant part of this class session to a footnote in which Marx calls for a “critical history of technology” to set against Darwin’s critical history of nature, a way to understand how societies, rather than species, change. As soon as I heard this bit it struck me as one of the undergirding impulses for much of my favorite intellectual work in edtech, honed to an art by folks like Audrey Watters in masterpieces like her piece on The Learning Channel.

Thanks Chris, we really appreciate you keeping the porch light on for awesome resources like this! You’re making the web a richer library with every downloaded video!

Reclaim Hosting in the Cloud


This past week marked the two-year anniversary of Reclaim Hosting, and what started as something of an experiment has turned into a successful business and one of the most rewarding things I've been a part of in my professional career. We've come a long way in two years, but we're learning every day, and one thing on my ever-growing bucket list of bugs and improvements has been our website. Not the design, mind you (though I do have lots of plans to give that a makeover), but rather the architecture and hosting of it.

When we started the company we had just one lonely server for people's sites, and that included our own. Today I manage many servers, but in part out of laziness I never did move our site off the same shared hosting server our customers were on. I could justify this for a while because it's not like we get a ton of traffic (though we're not small), but I also felt at the time that being on the same system everyone else was using was a good gauge of our service. Alas, I started to see holes in that methodology when people who had issues accessing their account, be it firewall restrictions or otherwise, would also no longer be able to get to our site. More and more it seemed like reclaimhosting.com should always be up and always be available, regardless of the status of any given server we manage.

Meanwhile, for the past year I've dug more and more into Amazon Web Services for server architecture. I helped UMW move UMW Blogs to AWS last summer, and in fact the DNS for reclaimhosting.com has been running through Route53 for at least a year now. With a few recent projects being candidates for a multi-layered cloud approach with AWS, I thought it was high time we moved the main website for Reclaim Hosting up there as a proof-of-concept and best-practice example of running WordPress in the cloud.

I started with the video below (the audio for it is incredibly quiet, but it's worth it), which helped me figure out a few neat tricks for caching, staging environments, and other things I hadn't yet experimented with. The author also shares a lot of information in the notes, which point to his GitHub repo for the project.

Today I flipped the DNS, and at this point I believe the site is now resolving for most folks on the new setup. Here's a quick laundry list of the items at play in our new setup for reclaimhosting.com (a rough CLI sketch follows the list):

  • A single EC2 server serves as a staging environment that uses the same database as the production servers.
  • Changes to the staging environment are committed to a private repo on GitHub (mostly just plugin and theme modifications, since the database stores almost all content).
  • Our database is a Multi-AZ (Availability Zone) RDS instance for high availability.
  • An Elastic Load Balancer receives requests to the site and sends them on to the production servers.
  • OpsWorks is set up to deploy changes from the git repo to 4 production EC2 servers.
  • Each production server is located in a separate datacenter for high availability.
  • All uploads are sent to S3 storage and served on the website through the CloudFront CDN for faster response times globally.
  • A Redis object cache on an ElastiCache instance stores objects in memory across all production servers.
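For a flavor of how a couple of those pieces get wired together, here's a rough sketch using the AWS CLI; every identifier and value below is a placeholder, not our actual configuration:

# a Multi-AZ MySQL instance on RDS for the database layer
aws rds create-db-instance \
    --db-instance-identifier example-wp-db \
    --db-instance-class db.t2.micro \
    --engine mysql \
    --multi-az \
    --allocated-storage 20 \
    --master-username admin \
    --master-user-password 'replace-me'

# a load balancer to spread requests across the production servers
aws elb create-load-balancer \
    --load-balancer-name example-wp-lb \
    --listeners "Protocol=HTTP,LoadBalancerPort=80,InstancePort=80" \
    --availability-zones us-east-1a us-east-1b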


The new setup also has me digging a bit into Chef, which is used in the deployment process by OpsWorks but not something I previously had any experience with. The upgraded setup also comes with a lot of security enhancements. Logins are only available via white-listed IP to our staging environment, and wp-login.php is completely removed during deployment, so it's virtually impossible for someone to get into our site. Security groups (essentially firewalls) are configured so that each layer only has the necessary ports open to the servers that need them. For example, the database can only be accessed by the running EC2 servers, the production servers only accept traffic on ports 80 and 443 (HTTP/HTTPS) from the load balancer, and staging is only accessible via SSH with a private key.
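As a sketch of what those security group rules might look like with the AWS CLI (the group IDs below are placeholders):

# only the production servers' group can reach MySQL on the database layer
aws ec2 authorize-security-group-ingress \
    --group-id sg-dbexample \
    --protocol tcp --port 3306 \
    --source-group sg-prodexample

# the production servers accept web traffic only from the load balancer's group
aws ec2 authorize-security-group-ingress \
    --group-id sg-prodexample \
    --protocol tcp --port 80 \
    --source-group sg-elbexample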

I started out with t2.micro instances across everything (the cheapest Amazon offers), bringing the total cost of a setup like this to around $60/month running 24/7 with 5 servers, a production database, an in-memory cache instance, and ~1GB of data stored in S3 with CDN caching. At the moment performance seems to be pretty great considering the small size of the production servers.

This is no doubt an incredibly complex environment and not something I would recommend anyone tackle for their personal site, but for large production instances of WordPress used by your campus, or sites that absolutely need to be online at all times and highly available globally? Absolutely! And heck, if you're unsure how you might accomplish the same, hire Reclaim Hosting to help you get set up or host it all for you in the cloud!

Discourse(s) on Docker

One of the things Tim built a while ago is a server running multiple instances of the forum software Discourse using Docker. He did this because we’re getting more and more interest in this forum software at Reclaim Hosting. As usual, Tim came up with a pretty slick setup that enables us to provide it fairly easily and cheaply. To be clear, I have not yet gone through the process of setting up the server environment that runs multiple Docker instances of Discourse; I want to go through that process next. In the interim, this post will simply go through setting up a new instance of Discourse using Docker, in an attempt to beef up our internal documentation.

If you are interested in getting up and running with Discourse in a Docker container, check out Sam Saffron’s excellent overview of Docker, the various issues installing it, and a write-up of his process. I used that post on several occasions to refresh myself on the commands I needed to bootstrap and start the Docker container.

So, after logging into your server via the command line (are you still with me?), you would change directories to where the container setup files for each install are kept:

cd /var/discourse/containers

 

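To give you the idea, a quick listing of that directory looks something like this (a mocked-up example; your file names will differ):

ls /var/discourse/containers
app.yml  community.yml  reclaim.yml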

The containers directory holds a .yml file for each of the Discourse installs. YAML files are basically a serialized set of instructions telling Docker how to set up the environment. You have to create a new YAML file for each new install, and the easiest route is to copy another install’s config and change a few details. So, open an existing YAML file by typing something like this:

nano nameofanothercontainer.yml

You will then be shown the config file for that container. Copy it and then create a new YAML file like so:

nano bava.yml

Copy the contents from the other container into this new file, and change the following details.

In this bit you include the email, or emails, of the admins:

## TODO: List of comma delimited emails that will be made admin and developer
## on initial signup example 'user1@example.com,user2@example.com'
DISCOURSE_DEVELOPER_EMAILS: 'info@reclaimhosting.com,jimgroom@gmail.com'

In the following area you change the hostname to the domain (or subdomain) you will be installing Discourse on. Chances are you will be mapping an A-record here; more on that later:

## TODO: The domain name this Discourse instance will respond to
DISCOURSE_HOSTNAME: 'discourse.bavatuesdays.com'

In this area you need to set up your mail server info so Discourse can send emails to new users, etc. We use Mandrill for this at Reclaim Hosting, and it seems to work well. This is a bit different from applications in a LAMP environment, which has all of this set up for you. With the newfangled Ruby and Node.js apps you’ll find you need a transactional mail service like Mandrill, Mailgun, etc., which are basically API-driven mail services for developers, or so I’ve heard.

## TODO: The mailserver this Discourse instance will use
DISCOURSE_SMTP_ADDRESS: smtp.mandrillapp.com # (mandatory)
DISCOURSE_SMTP_PORT: 587
DISCOURSE_SMTP_USER_NAME:  username
DISCOURSE_SMTP_PASSWORD: yourpasswordhere
#DISCOURSE_SMTP_ENABLE_START_TLS: true # (optional, default true)

Finally, you need to change a few paths to point to your new Discourse container. Anywhere you see bava below was previously the name of the container whose YAML file I copied from. For example, if I copied from the reclaim.yml file, everywhere you see bava below would have originally read reclaim:

## These containers are stateless, all data is stored in /shared
volumes:
- volume:
    host: /var/discourse/shared/bava
    guest: /shared
- volume:
    host: /var/discourse/shared/bava/log/var-log
    guest: /var/log

Now save the bava.yml file.

After that, we need to edit the Discourse settings for Nginx. Notice that Discourse is a Ruby application running behind Nginx—two big reasons this application doesn’t run in a LAMP environment. Nginx is a web server, like Apache, but with different requirements than what’s bundled with a LAMP stack. Very few Ruby applications run in a LAMP environment, which means a whole generation of Ruby and Node.js web apps depend on a sysadmin to get running. One of the many reasons to be excited about Docker is that it can potentially make hosting these applications a lot easier. Anyway, we still have to edit Nginx.


Go to:

cd /etc/nginx/sites-available

From there you need to edit the discourse file:

nano discourse

There will be a series of server blocks, one for each of the running Discourse containers. Copy one, paste it at the end of the file, and edit it to work for your container. For example, I swapped in the URL my container runs at (discourse.bavatuesdays.com) and put bava in the proxy_pass socket path:

server {
        listen 80;

        # change this
        server_name discourse.bavatuesdays.com;

        client_max_body_size 100M;

        location / {
                proxy_pass http://unix:/var/discourse/shared/bava/nginx.http.sock:;
                proxy_set_header Host $http_host;
                proxy_http_version 1.1;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
}

Now you need to go to the /var/discourse directory, bootstrap the container, and then run it. Below are the commands. Bootstrapping will take a little while, because that is when the image for your Discourse application gets built inside the container.

cd /var/discourse
sudo ./launcher bootstrap bava


If it bootstraps successfully it will tell you as much, and then you can start the application:

sudo ./launcher start bava

If that works, you need to remember to restart Nginx using the command below:


service nginx restart

We talked about mapping an A-record above; if you haven’t done that, your Discourse application won’t be visible anywhere. So this might be a good time to go to the DNS zone editor for the domain you want to point here and add an A-record with the IP address of the server.
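In zone-file terms, the record would look something like this (the IP below is a placeholder; use your server’s actual address):

discourse.bavatuesdays.com.    14400    IN    A    203.0.113.10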

After that, you should have a brand spanking new Discourse application up and running. I still have yet to play with discourse.bavatuesdays.com, so that should be a future post.


What’s cool is that if I run the docker ps command in the terminal, I can see all the containers running Discourse.
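The output looks something like this (a mocked-up listing; the IDs, ages, and names here are invented):

CONTAINER ID   IMAGE                     COMMAND        CREATED       STATUS       PORTS   NAMES
f3a9c1d2e4b5   local_discourse/bava      "/sbin/boot"   2 hours ago   Up 2 hours           bava
9e8d7c6b5a43   local_discourse/reclaim   "/sbin/boot"   3 weeks ago   Up 3 weeks           reclaim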


If we could automate this process, which I imagine is possible, we could have a server that provides Discourse instances for anyone who wants one, making hosting an application like this relatively easy. And unlike shared hosting or a multi-site application, the fact that each instance lives in its own container means it would not affect any of the others. Trippy and cool.
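To give a sense of how that automation might look, here’s a rough, untested sketch. It assumes a containers/template.yml with TEMPLATE_NAME and TEMPLATE_HOST placeholders, which is my own invention rather than part of the Discourse tooling:

#!/bin/bash
# new-discourse.sh -- hypothetical helper that stamps out a new Discourse container
# usage: ./new-discourse.sh bava discourse.bavatuesdays.com
set -e
NAME="$1"
HOST="$2"
cd /var/discourse
# fill in the template's placeholders to create the new container config
sed -e "s/TEMPLATE_NAME/$NAME/g" -e "s/TEMPLATE_HOST/$HOST/g" \
    containers/template.yml > "containers/$NAME.yml"
# build the image and start the container
sudo ./launcher bootstrap "$NAME"
sudo ./launcher start "$NAME"
# (you would still add the Nginx server block and the DNS A-record by hand)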