Rancher: Part 0 - Introduction to Rancher

Welcome to part 0 of a multi-part series where I will walk through the setup and configuration of Rancher. For those unfamiliar, Rancher is an open-source platform for running containerized workloads. From their website:

Rancher is a complete, open source platform for deploying and managing containers in production. It includes commercially-supported distributions of Kubernetes, Mesos, and Docker Swarm, making it easy to run containerized applications on any infrastructure.

This multi-part series will follow my exploits in converting many services running on bare-metal hardware into Docker containers running within Rancher. Future parts will include:

  • Installation and Configuration
  • Default Catalog Overview
  • Creating a Private Catalog
  • Monitoring the System
  • More?

Problem Definition

I have a small home lab. As part of it, I run a three-node vSphere cluster and a couple of bare-metal systems. On one of those bare-metal systems, I have a bunch of services running: Jenkins, Tiny Tiny RSS, FlexGet, SABnzbd, CouchPotato, SiCKRAGE, and a few others. While this works, and I have been doing it for a while, maintaining the system is getting harder and harder. It's difficult to try out something new because I need to remember what depends on what, where data files are stored, what versions are running, how to set up and install dependencies, etc.

As part of my day job, I deal with dockerized workloads and managing them. Running microservices inside of containers makes things much simpler: the dependencies of one service do not impact the others, and it's easy to see exactly what is running and at what version.
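To make that concrete, here is a minimal sketch of what two of the services above might look like as a docker-compose file. The image names and tags are illustrative, not necessarily what I will end up deploying; the point is that each service declares its own image, version, and data volume, so nothing leaks between them.

```yaml
# docker-compose.yml -- illustrative only; image tags are examples, not
# necessarily the versions I will deploy
version: '2'
services:
  jenkins:
    image: jenkins/jenkins:2.60.3      # the version is pinned in one obvious place
    ports:
      - "8080:8080"
    volumes:
      - jenkins_home:/var/jenkins_home # data lives in a named, inspectable volume
  sabnzbd:
    image: linuxserver/sabnzbd:latest  # its dependencies are baked into its own image
    ports:
      - "8081:8080"
volumes:
  jenkins_home:
```

Compare that with a bare-metal box: one glance at the file (or at `docker ps`) tells you what is running, at what version, and where its data lives.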

I first learned of Rancher at a conference in early 2016. I took a look at it back then and it was 'ok'. My biggest concern was how to handle the storage needs of all these containers. Running a single instance of something was fairly easy: either store files locally or store them on a CIFS/NFS share (making sure to back them up). If I wanted to run several instances of a container, for high availability or for scale-out, it got much harder. This is not a problem unique to Rancher; it is one of the unsettled frontiers of running containers in production.

A lot has happened since then. Rancher now offers some services to help in this area, and two of them caught my eye: Rancher EFS and Rancher NFS. The former is a volume driver for Amazon's EFS service, and the latter is a volume driver for generic NFS. I thought I'd play around with Rancher NFS and see if it helped me manage my storage concerns.
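As a preview of where this is headed, here is a hedged sketch of what using that driver could look like in a compose file, based on my reading of the Rancher 1.x docs; treat the details as assumptions until I verify them in a later part. The NFS server and export are configured once, when the Rancher NFS stack is deployed from the catalog, so individual volumes only need to name the driver:

```yaml
# docker-compose.yml -- a sketch, assuming the Rancher NFS stack from the
# catalog is already deployed (which is what registers the rancher-nfs driver)
version: '2'
services:
  jenkins:
    image: jenkins/jenkins:2.60.3
    volumes:
      - jenkins_home:/var/jenkins_home # same data regardless of which host runs it
volumes:
  jenkins_home:
    driver: rancher-nfs                # backed by the NFS export configured when
                                       # the Rancher NFS stack was deployed
```

The attraction is that the volume lives on the NFS server, so the container can be rescheduled to any host in the environment without losing its data.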

Over the next several weeks, I will be attempting to replace my bare-metal server, which runs all this crud, with a series of virtual machines configured with Rancher and running containers. Why virtual machines instead of running Rancher on bare metal? Simple: it's really easy for me to scrap a VM and start over.

Introduction to Rancher

First, check out the Rancher Website...

I can't describe Rancher any better than the experts can. If you are at all interested in running containers in production, I urge you to take a look at it. Rancher natively supports deploying Kubernetes, Mesos, and Swarm clusters alongside its own Cattle clusters. It supports application catalogs, both public and private, for easy deployment of services, and it lets you scale services up or down with a couple of clicks. The claim is that Rancher makes running containerized workloads easy and production-capable. We will put that claim to the test over the next few weeks.

Next Up...

Rancher Installation and Configuration