Duke’s Website has Gone Docker

I was excited to see Tony Hirst retweet the news that Duke University’s website is being run in a Docker environment, and it can even be served through Amazon Web Services. Chris Collins, senior Linux admin at Duke, wrote about “Using Docker and AWS to Survive an Outage”—an outage they had as a result of DDoS attacks on their main site back in January. I love the way he tells the story:

While folks were bouncing ideas around on how to bring the site up again while still struggling with the outage, I mentioned that I could pretty quickly migrate the site over to Amazon Web Services and run it in Docker containers there. The higher-ups gave me the go-ahead and a credit card (very important, heh) and told me to get it setup.  The idea was to have it there so we could fail over to the cloud if we were unable to resolve the outage in a reasonable time.

TL;DR – I did, it was easy, and we failed over all external traffic to the cloud. Details below.

He goes on to describe his process in some detail, and it struck me just how quickly the shift in IT infrastructure is happening, and it also made me wonder how many IT organizations in higher ed are truly rethinking their architecture along these lines. It’s one thing to push your services to a third-party vendor that hosts all your stuff; it’s altogether different to bring in a team that understands and is prepared to move a university’s infrastructure into a container-based model that can be hosted in the cloud. Not to mention what this might soon mean for personal options, and a robust menu of teaching and learning applications heretofore unimaginable. This would make the LAMP environment options Domain of One’s Own offers look like Chucky from Child’s Play.

I know Tim and I are looking forward to thinking about what such a container-based architecture might mean for an educational hosting environment that is simple, personalized, and expansive. Tim turned me on to Tutum recently, which starts to get at the idea of a personalized cloud across various providers—something Tim Klapdor gets at brilliantly:

MYOS is very much the model that Jon Udell laid out as “hosted life bits” – a number of interconnected services that provide specific functionality, access and affordances across a variety of contexts. Each fits together in a way that allows data to be controlled, managed, connected, shared, published and syndicated. The idea isn’t new, Jon wrote about life bits in 2007, but I think the technology has finally caught up to the idea and it’s now possible to make this a reality in a very practical way.

His post on the topic deserves a close reading, and it’s the best conceptual mapping of what we might build that I have read yet. I wanna help realize this vision, and I guess I am writing about Duke University’s move to Docker because it suggests this is the route higher ed IT will be moving towards anyway (sooner or later, which could be a long later for some). Seems we might have an opportunity to inform what it might look like for teaching and learning from the ground floor. It’s not a given it will be better; that will depend upon us imagining what exactly a teaching and learning infrastructure might look like. Tim Klapdor has provided one of the most compelling visions to date, building on Jon Udell’s thinking, but that’s just the beginning.

Dockers

[Animated GIF: The Wire, season 2, episode 9, the vanishing container]

The above GIF is from an episode of The Wire during season 2. The docks are ubiquitous that season, and this particular image is a visualization of a cloned machine that captures the vanishing container—presumably filled with illegal cargo. I’m fascinated by the representation of technology throughout the series, but season 2 in particular is really interesting. There’s the highlighting of a cultural move to digital cameras, the increasing popularity of the web, GPS, and much more that’s constantly being discussed, but there’s also the radical changes to the physical technology of the dock. The first part of the following video features the presentation from season 2, episode 7 about the automation of the port of Rotterdam.

Frank Sobotka refers to this as a “horror movie,” noting the eroding need for stevedores, and more generally for labor. The automated container technology becomes a sign of labor’s vanishing past.

[Image: shipping containers]

At the same time the container systems that have redefined the way shipping works have metaphorically come to servers thanks to Docker.

Docker

To the degree I fully understand it, Docker provides an open platform for building, running, and shipping distributed applications. In other words, you can get a pre-configured container through Docker that has the proper server environment for running a specific application. For example, if you want to run the forum software Discourse or the blog engine Ghost (which is what Tim Owens has figured out recently for Reclaim Hosting), we have a server with the Docker engine installed that allows us to quickly fire up different application environments and run them for anyone who requests them.
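To make that concrete, here is a minimal sketch of what firing up one of those containers might look like on a host with the Docker engine installed. This isn’t Tim’s actual setup, just an illustration; `ghost` is the application’s published image on Docker Hub, and the container name `my-ghost` is made up for this example.

```shell
# Pull the pre-built Ghost image from Docker Hub (the public image repository)
docker pull ghost

# Run it as a background container, mapping Ghost's default port (2368)
# to the host; "my-ghost" is a hypothetical name for this example
docker run -d --name my-ghost -p 2368:2368 ghost

# Confirm the container is up and running
docker ps
```

The same couple of commands, with a different image name, would stand up Discourse or any other application published as an image; no hand-building a LAMP stack for each app.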


And we are grabbing those application images from an open repository of virtualized possibilities, which helps us avoid becoming overly dependent on a closed platform like Amazon Web Services—a major bonus. Additionally, Tim is playing with Shipyard, which allows you to manage various containers and resources on your server. What strikes me about all of this is how the metaphorical language of docks, shipyards, and containers helps me wrap my head around this technology. What’s more, it’s cool to see it both through the eyes of Frank Sobotka and Tim Owens—two of my heroes :)
