Whoowns or Whoowens?

One of the cPanel scripts I’ve found really useful as of late is the whoowns script, which lets you know which account owns a specific domain. Let me provide a quick scenario. You have an issue with a domain and you can’t figure out which account it lives in, which could mean it’s an addon domain that wasn’t registered through us, etc. Tracking it down can be a pain. You can figure out what server it is on by using a command like nslookup (name server lookup) that will tell you the hostname and identify the server:

nslookup themissingdomain.tld

The above command will return something like beathap.reclaimhosting.com, which means the account is on the Beathap server (strictly speaking, nslookup on the domain returns its IP address, and a reverse lookup on that IP is what surfaces the hostname, as sketched below). But given it is not the primary domain of an account, it is not going to appear in the list of all cPanel accounts. And this is where I would get stuck.
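A rough sketch of that two-step lookup (192.0.2.10 is an illustrative documentation address, not a real server):

nslookup themissingdomain.tld
# returns the domain's A record, e.g. 192.0.2.10
nslookup 192.0.2.10
# the reverse (PTR) lookup returns something like beathap.reclaimhosting.com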

But using whoowns will tell you the account owner. Just log in via the terminal and use the following command:

/scripts/whoowns themissingdomain.tld
themissi

That output is the cPanel username of the account the domain lives in, which means problem solved. A simple, useful script.

So, when extolling its virtues in Slack I wrote /scripts/whoowens, and soon after Tim had some fun and wrote his own script. Now, when you run /scripts/whoowens on any of Reclaim’s servers you get the following:

That’s geeky and it’s awesome. Hosting humor #4life.

Sequel Pro’s SQL Inserts

Another tool I’ve been becoming more familiar with, for sites that don’t have phpMyAdmin to access the MySQL databases, is Sequel Pro. It’s an open source application for managing SQL databases on the Mac. I have come to appreciate it in newfound ways after the UNLV migration; it is to database management what Transmit has been to moving files around via FTP. Anyway, one thing I discovered it can do is copy the structure of a database table, such as wp_users:

And then insert it as SQL code in something like phpMyAdmin:
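To give a sense of what that copy looks like, here is roughly the CREATE TABLE statement you get for wp_users (abridged to the standard WordPress schema; exact column types, keys, and charset will vary by WordPress and MySQL version):

CREATE TABLE `wp_users` (
  `ID` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `user_login` varchar(60) NOT NULL DEFAULT '',
  `user_pass` varchar(255) NOT NULL DEFAULT '',
  `user_nicename` varchar(50) NOT NULL DEFAULT '',
  `user_email` varchar(100) NOT NULL DEFAULT '',
  `user_url` varchar(100) NOT NULL DEFAULT '',
  `user_registered` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `user_activation_key` varchar(255) NOT NULL DEFAULT '',
  `user_status` int(11) NOT NULL DEFAULT '0',
  `display_name` varchar(250) NOT NULL DEFAULT '',
  PRIMARY KEY (`ID`),
  KEY `user_login_key` (`user_login`),
  KEY `user_nicename` (`user_nicename`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

Paste that into phpMyAdmin’s SQL tab and the table structure is recreated, no hand-writing of SQL required.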

Sequel Pro does all the SQL query structuring for me, which is awesome. It was a nice little bonus to discover, and another trick for the toolbox.


Digital Ocean’s One-Click Apps vs. Cloudron

Digital Ocean has been en fuego as of late. They announced a whole bunch of new droplet plans, and the price point for all of them has gone down. This is very good news for Reclaim Hosting because it gives us some breathing room with our infrastructure costs, allowing us to continue to keep prices low. We have been slowly moving most of our infrastructure from Linode and ReliableSite to Digital Ocean, and we could not be happier. They are constantly improving their offerings, and being in a virtual environment where we can increase storage or scale CPU instantaneously makes our life (and our clients’) a lot easier.
One-click Apps at Digital Ocean

In addition to new plans and pricing, I noticed they were featuring one-click apps as well (though I’m not sure how new this is), and I took a peek to see what they offered. It was interesting to see that some of the applications they featured, namely Discourse (the forum software) and Ghost (the blogging app), were apps Reclaim was offering beyond our shared hosting cPanel-based LAMP stack. Given we’ve been exploring a one-click option with Cloudron (I recently blogged about setting up Ghost using Cloudron), I wanted to compare Digital Ocean’s idea of one-click to Cloudron’s. Long story short, there is no comparison. Here is Digital Ocean’s command line interface for setting up Ghost:

Command line interface during Ghost setup on Digital Ocean’s one-click apps

Here is Cloudron’s:

One-click install of Ghost on Cloudron

Digital Ocean is amazing at what they do, but their idea of one-click installs still assumes a sysadmin level of knowledge, which, to be fair, makes sense given they are a service designed for sysadmins. When I tried the Ghost app it was, indeed, installed on a droplet in seconds, but the actual configuration required following a full-blown tutorial of command line editing. Beyond pointing the domain, this meant setting up SSL and Nginx; granted, that mostly amounted to typing “yes” or “no” and hitting enter, but even when you did, a working setup was not guaranteed. After following the tutorial to the letter I still got the Nginx 502 bad gateway error, which means I was stuck.

Ghost 502 Bad Gateway Nginx Error

I could have tried to troubleshoot the 502 error, but at this point it was just a test, and in my experience it was far from one-click.
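For what it’s worth, had I kept going, the usual first steps for a 502 behind Nginx are to check whether the upstream app is actually running. A quick sketch (port 2368 is Ghost’s default; everything else here is generic, not specific to this droplet):

# Is Nginx itself up?
sudo systemctl status nginx
# Is anything listening on Ghost's default port? A 502 usually means the upstream app is down.
sudo ss -tlnp | grep 2368
# What does Nginx say about the failed proxying?
sudo tail -n 50 /var/log/nginx/error.log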

Discourse example

I then tried the Discourse app, and this was definitely easier than Ghost. It still required a tutorial, but that was primarily focused on setting up an SMTP account through Mailgun so the application could send email. After that, the setup was simple, but again the one-click setup process on Digital Ocean assumes an understanding of API-driven transactional email services like Mailgun or SparkPost. Cloudron does not have a Discourse installer, so no real comparison there, but if it could manage the SMTP email setup in the background, I imagine it would be just as simple as their Ghost installer. I’m glad I explored Digital Ocean’s one-click application offerings because it confirms for me the potential power of tools like Cloudron that truly make it simple to install applications. Our community by and large will not be folks with sysadmin-level knowledge, so integrating a solution that is truly one-click, avoiding DNS and command line editing, would be essential.
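To give a flavor of the Discourse side, the image hands you off to Discourse’s own setup script, which walks through a handful of prompts, most of them SMTP-related. Roughly (prompts paraphrased from memory, values illustrative):

cd /var/discourse
./discourse-setup
# Hostname for your Discourse? discourse.example.com
# Email address for admin account(s)? admin@example.com
# SMTP server address? smtp.mailgun.org
# SMTP user name? postmaster@mg.example.com
# SMTP password? (your Mailgun SMTP credential)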

NCDU-fu

Sometimes it feels good to have some meager sysadmin competencies, such as knowing how to quickly identify where large files are in a particular hosting account. This issue comes up from time to time when someone discovers all their storage space has been eaten up, but they are not sure where or why. Often this is a symptom of a larger problem, such as an error_log that has run out of control, which suggests a bad process in a particular application, etc. That was the case on a ticket this morning, and luckily I knew the ncdu command. What is ncdu, you ask?

Ncdu is a disk usage analyzer with an ncurses interface. It is designed to find space hogs on a remote server where you don’t have an entire graphical setup available, but it is a useful tool even on regular desktop systems. Ncdu aims to be fast, simple and easy to use, and should be able to run in any minimal POSIX-like environment with ncurses installed.

In other words, a script for a remote server that finds big files. You can install it on your server, and then run it by navigating in the command line to the offending account, which on our cPanel servers would live at /home/offendingaccount, and running the command ncdu. After that, it will list all the directories and their sizes, followed by files. You can then locate the directory with the largest file usage, change to that directory, and run ncdu again until you find the offending file. In the example this morning, it was a 6 GB error_log in a directory running WordPress; an easy fix to clean out space, and a good heads up that things in that account need to be checked for a bad plugin, theme, etc.
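The whole workflow in terminal form, using the hypothetical account name from above (the install line assumes a CentOS-style cPanel server, where ncdu is typically available via the EPEL repository):

yum install -y ncdu
cd /home/offendingaccount
ncdu
# ncdu's ncurses interface lets you drill into the largest directories until the culprit surfaces.
# Once found, a runaway log can be emptied in place without deleting the file:
truncate -s 0 /home/offendingaccount/public_html/error_log

(The error_log path is illustrative; it lives wherever the misbehaving application writes it.)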

The life of a Reclaimer is always intense

Set UMW Blogs Right to Ludicrous Speed

UMW Blogs was feeling its age this Spring (it officially turned ten this month, crazy!) and we got a few reports from the folks at DTLT that performance was increasingly becoming an issue. Since 2014 the site had been hosted on AWS (Tim wrote up the details of that move here) with a distributed setup storing uploads on S3, the databases on RDS, and running core files through an EC2 instance. That was a huge jump for us back then into the Cloud, and the performance gains were significant. AWS has proven quite stable over the last two years, but it’s never been cheap: running UMW Blogs back then cost $500+ a month, and I’m sure the prices haven’t dropped significantly since.

Continue reading “Set UMW Blogs Right to Ludicrous Speed”

A WHMCS Invalid Token Error and the glory of blogging

I woke up this morning to find that our WHMCS portal for Reclaim Hosting was having some issues. WHMCS is software that enables you to manage the business side of cPanel: provisioning, invoicing, billing, renewing, etc. Without it, people can’t sign up for new accounts, pay their bill, or access their client area. They can still access their sites through theirdomain.com/cpanel, but they would need to use their SFTP credentials to log in there, so it would get bad quickly, support-wise. So, when I discovered the 503 Service Unavailable error I knew I needed to fix it immediately. It happened at both a good and a bad time. Good because it was late night in North America, so demand was not at its peak. Bad because my Reclaim partner Tim Owens was fast asleep. But, in fact, that might have also been good, because I tend to lean on him for this stuff given I’m afraid to mess shit up.

Continue reading “A WHMCS Invalid Token Error and the glory of blogging”

Changing Storage Quota for cPanel Accounts

This is a quick and easy tutorial for changing storage space quotas on specific cPanel accounts, perfect for a rainy Sunday morning. I often get this question from someone managing a Domain of One’s Own initiative who needs to modify an account to allow for more storage space.

This process is done in WHM, which is basically the GUI for managing all the accounts on a cPanel server. Once logged in, do a quick find using the word “list” (no quotes) in the upper left-hand corner. Then click “List Accounts,” which will allow you to search for the account you need. You can search by the username or domain as demonstrated below.
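For the terminal-inclined, the same change can be made from a root shell with WHM’s API. A minimal sketch, with an illustrative username and a 2000 MB quota (double-check the parameters against cPanel’s documentation for your version):

whmapi1 editquota user=exampleuser quota=2000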
Continue reading “Changing Storage Quota for cPanel Accounts”

“Anything is Possible in Linux”

I was doing a major migration of various sites for Gary Stanton, a Historic Preservation professor I worked with on and off for a decade at the University of Mary Washington. About the same time I was leaving he was retiring, and we had worked together on a ton of WordPress sites. He is a folklorist by training, and he has unbelievably eclectic interests in all sorts of awesome vernacular American culture. When he creates class sites, they usually weigh in by the gigabyte given how many audio files, images, and documents he shares with his students. He’s been building sites like that for years, and when he asked if he could move his stuff to Reclaim Hosting after retirement I jumped at the chance. He has so much cool stuff to share; he’s one of those folks that makes the web a better place by populating it with his closet of curiosities. And to think Reclaim can help make sure it’s online and stays around for the long haul is an honor and a privilege. Continue reading "“Anything is Possible in Linux”"

Installing and Customizing a Scalable WordPress Multisite with Linode’s StackScripts

I’ve been on a server admin crash course over the last 8 months or so, and I’ve been thoroughly enjoying myself. I have been fortunate to have the most patient and generous teacher I’ve ever studied under: the great Tim Owens. I truly have a deep respect for how much he has taught himself over the last 4 years, and trying to catch up with him gives me an even deeper appreciation of his mad skills. One of the turning points for Reclaim Hosting this semester has been taking on large-scale WordPress Multisite instances for institutions. We jumped in with both feet when we took over the hosting of VCU’s Ram Pages—a beast I have written about recently. Tim did a brilliant job scaling this extremely resource intensive WordPress Multisite, and I was eager to try my hand at the setup. Luckily Reclaim has no shortage of opportunities, and recently the University of North Carolina, Asheville was interested in experimenting with a pilot of WordPress Multisite, so I got my chance to work through the setup with a brand new install. Continue reading "Installing and Customizing a Scalable WordPress Multisite with Linode’s StackScripts"

Dr. Reclaimlove or: How I Learned to Stop Worrying and Love Devops

One of the best things (besides the /giphy function in Slack) about getting some time each month to work for Reclaim Hosting is how it has put tasks at my “traditional” full-time IT job into perspective, contrasting my full-time IT environment, which is pretty old fashioned (physical stuff), with an environment that relies heavily on Devops, virtual IT, and cloud administration. Fundamentally, Reclaim is a model example of how to effectively run a lean startup, manage virtual IT, and stay mostly hands off, and it’s been a good introspective experiment for someone like myself who still grips precariously upon the edge of physical infrastructure and an old-school IT background.

Devops is kind of a contentious term for many “traditional” (read: mostly this means hands-on) IT people, because it represents a massive shift in the way IT work is defined and performed. Since people, especially IT people, are often prone to some degree of change-averseness (guilty) and paranoia (doubly guilty) about their precious hardware racks, “if my infrastructure goes away then my job will go away” is not an entirely unreasonable conclusion to arrive at. We are looking at a fairly unprecedented degree of change in our industry and at blazing speed. Our jobs, though not the same as they were 10 or 15 years ago, did not start becoming substantially different until about 3-4 years ago. Neckbeard the Elder would probably be a high performer at most existing IT generalist jobs in 2013 and 2014…maybe 2015, too. The next generation of IT generalists (and IT generalists will still exist) will not rely upon Neckbeard the Not-Quite-Elder (that’s us) unless we decide, right now, to acknowledge that these changes are happening and that we will perish if we don’t adapt.

So how do we embrace the changes if we’re in traditional IT and not Devops IT? First, we have to acknowledge what Devops actually means…and since the “real” definition of Devops is still up for some discussion, let’s try to define it in the context of traditional IT work:

Devops is a collection of hands-off methodologies designed to reduce the need for physical infrastructure in favor of virtual, managed infrastructure over a hosted medium.

I.e., “use the cloud, and write some scripts.” This is in comparison to the Wikipedia definition of Devops:

DevOps is a software development method that stresses communication, collaboration, integration, automation, and measurement of cooperation between software developers and other information-technology (IT) professionals.

“Woah woah woah. I’m in IT. I’m not a software developer. I don’t want to have to deal with them.”

This sort of makes it sound like you have to be a software developer in order to be successful in IT, which is not entirely true, but I will stress this: if you want to be successful in IT in 2015 and beyond, you need to know something about how to code. Your code doesn’t have to be flashy, and you don’t need to be an expert, but it should be effective and reasonably efficient. As for which “code,” I recommend learning Bash, Python, or PowerShell (if you are in a Windows-heavy environment). I dabble in all three of these languages, and though I am not terribly good, I understand some of the thought processes that developers go through when iterating on their previous code, and it helps me “get into the head” of a developer a bit. It’s also a huge opportunity for me, and it can be for you too.

If you’re like me, you are an overworked, overstressed IT admin. I have begun to embrace Devops because it gives me an avenue for working less…if I commit to the avenue. Basically, I don’t really want to do any work. I would rather be doing other things that are more fun, but I still need to have a job so I can do those other fun things. Some of those fun things are actually “work,” but they’re not really “work” to me, and I still get paid to do them? Anyway, I can turn this into a win-win using Devops, and I am going to use a very Windows-centric example because it’s the easiest to understand, and because most software developers I know do not use Windows, and I am trying to keep something of a line there.

Setting up Active Directory and doing it well is difficult. Setting up AD well using Microsoft Azure is less difficult because I don’t have to worry about unscalable and unstable hardware, vendors, CALs, incredibly obtuse licensing, etc. So what is the opportunity? Setting up AD (in the cloud), integrating it with VMM (in the cloud), and creating a “developer hook” (which could be as simple as a batch file run from the requestor’s desktop) so developers in the correct AD groups can request the creation of dev and staging machines, and have those machines created for them, without me having to really touch AD anymore; I couldn’t “literally” touch it anyway, because that hardware doesn’t exist in my universe. There is nothing for developers to break, because if the OS gets completely destroyed somehow they can just request a new virtual machine. Microsoft does a lot of the “devops” work for you in this example because of their integration tools, but you could also PowerShell a lot of that work away, intelligently, and then maybe you could even have a real lunch break! By the way, the work you do linking an Azure (or AWS, or Google) VPN to your network? That will not be work software developers are going to be doing in the foreseeable future.

I am not some Microsoft fanboi, but this is a simple example of how “thinking” in a Devops way can be hugely beneficial and not so scary at all, because it illustrates the need for traditional IT expertise alongside development and automation expertise. Few of these opportunities existed even 5 years ago. (OK, so that is a little scary.) If you are reading this blog, you might be thinking “well, he’s preaching to the choir a bit,” but I promise you, based on the things that I have seen and heard, I’m not. If you’re not convinced, look around at what some IT specialists are doing on LinkedIn. IT needs to “get real” on these things soon, and start getting its people trained to think in a way that fosters collaboration and automation.

Instead of being wary of Devops, make it work for you, as we are doing at Reclaim. I’m currently deploying a network and server monitoring solution in the cloud for Reclaim (also a “traditional IT” task), and I am creating an opportunity for myself to script or program away the SNMP configuration of the hosts I’d like to add to the monitoring solution, making it as close to “zero-touch” as possible. In a more advanced environment you could do something like this using Chef or Puppet, but for this task in particular I don’t even really need that. I am greatly expanding my Bash/shell skills (Dev) while incorporating my security, file transmission, service configuration, and permissions skills (Ops). When the prep work is done, the operator will be able to go through a simple series of conditionals that will copy the SNMP config file over to the machines to be monitored, with no additional input needed from “the IT guy.” This is not scary. Self service is good service. Eventually we may even get to the point of auto-discovery. But that’s TNG, and we’re still in Star Trek. 🙂
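To make the “zero-touch” idea concrete, here is a minimal sketch of the kind of script I’m describing; the hostnames, config path, and service name are illustrative, not Reclaim’s actual setup:

#!/bin/bash
# Push a prebuilt snmpd.conf to each host we want the monitoring server to poll,
# then restart the SNMP daemon so the new config takes effect.
HOSTS="web01.example.com web02.example.com db01.example.com"
for host in $HOSTS; do
  scp ./snmpd.conf root@"$host":/etc/snmp/snmpd.conf
  ssh root@"$host" "systemctl restart snmpd"
done

From there, adding a host to monitoring is one entry in a list rather than a hands-on configuration session.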

Jim and Tim have built the engine of Reclaim Hosting using simple, powerful Devops methodologies and thought processes. In doing so, they can focus on the customer and not let the hardware get in their way, and that is the essence of an effective business. “Think Devops” can be the essence of your IT infrastructure, your support organization, and even your sanity, if you learn, as I have, how to embrace these changes and live by this mantra. If you can’t commit yet, just start with “think automation.”

I will post more in the coming weeks about the monitoring platform and what we are doing, and I will also post some sample config files either here or on GitHub that you can port right into your own Linux machines if you’d like to start experimenting with the SNMP daemon. Until then, happy reclaiming!