This post is brought to you by Command Line Heroes, an original podcast from Red Hat.

My journey, as one might say, in search of the Holy Grail or the great unicorn called DevOps, began well over 20 years ago, yet I never knew it at the time… Actually, it began in 1984 when I was 13 and got my first computer of my own, a Commodore VIC-20. It wasn’t the first computer I had ever used, but it was mine. I pushed that system with 3 ½ KB of RAM to its limits. Technology has grown by leaps and bounds in such a short timespan. Since those days, I’ve worn many hats: I’ve owned my own company, I’ve helped tech-edit books, and I’ve been active in the open source community. Workshops, social media, meetups, Red Hat user groups (RHUGs), virtualization technology user groups (VTUGs), and more have all allowed me to share and learn at the same time.

Fast forward 10 years: I was a sergeant in the Army. Sometimes, when in garrison (which was rare), I got tasked with a bit of side work on UNIX systems, simply because nobody else knew how to use them. These puppies were coupled together with many pre-internet technologies.

My life forever changed

In 1997, a few years after I’d finished my time in service, I was working on a Unix team at a company that serviced student loans and got introduced to something that forever changed my life—Red Hat® Linux®.

It wasn’t until I was in a small startup called RagingBull.com in 1999 that I saw anything remotely close to a tightly knit set of processes around constant delivery with daily code updates to production. The lessons I learned there have remained with me since.

We were on the cutting edge of tech, running Red Hat Linux version 6.0 across 90% of our production. This was back when most people hadn’t even heard of Linux, let alone the concept of basing a whole business on it. I was the keeper of the customized kernel that allowed us to run our entire site in a ramdisk. We worked crazy long hours, and when not working, we still spent most of our time together; it was like a surrogate family for me.

Before my time at Red Hat, RagingBull was my mental bar for an ops team, for how a company should operate, and, not least, for how professionals should work together. Culture. Yes, culture... Not just process. Yet I would be remiss not to mention exactly how engrossed in enterprise processes I became at RagingBull. Great mentors, combined with an open culture of responsibility, accountability, and knowledge sharing, created my core of operational skills.

Developers, support staff, QA, Linux administrators, project managers, and (of course) IT management all attended our daily meetings: the daily review (basically what we called our daily standup), code review meetings, and the ever-important rollout meetings. We all worked together. Each member of the extended team had the ability to give a deployment the thumbs down, halting it for cause. Granted, that veto rarely got used. But it was used a few times, and for that we were all glad.

We did daily code and content pushes to production. Yes, daily. Sometimes more than one release in a day… Now, fully accepting that back in 1999 IT tooling and processes were nowhere near as evolved as they are today, we were rockin’ it at the “bleeding edge.”

A rollout back in the day

So, allow me to describe a rollout back then. Our developers programmed the updates and worked with a release manager to create a build and document it, along with each rollout and its rollback. The QA team ran it through some tests (even some automated ones). Upon passing all tests, the code release got reviewed by the previously mentioned cast of characters. Then, if it survived the code review and rollout meetings, it was officially handed off to the Linux admins to roll out that night. From roughly 5 to 9 p.m., we had downtime, with dinner brought in nightly. At 9, the team of Linux admins executed the previously documented rollouts, oftentimes via scripts written during the downtime hours. We had to wait until after 9 to execute, when all of the U.S. stock exchanges were closed.

By now you’re probably thinking, “Yeah, yeah. That’s great and all, but it’s not DevOps.” Correct. Absolutely. I wanted to share a story familiar to most about production code releases. We automated what we could back then, but truly there was a significant amount of manual heavy lifting. Others simply called it pain. Most still do today.

Based on many years of past experiences—both good and bad—I personally have a vast appreciation for the evolution of tooling and process and how they continue to improve today.

DevOps and CI/CD

DevOps and continuous integration and continuous delivery (CI/CD) come up daily now in conversations with customers, partners, and community event attendees. I’ll share with you what I tell them when I’m asked about it. DevOps (or CI/CD) is a journey. The path is varied for all who partake. The tools and processes adopted are just as varied. No two organizations I’ve worked with or seen do it the same way.

Yet, the Wikipedia definition of DevOps does imply a few things:

  1. It’s end-to-end automation. You can choose a single tool or multiple tools, but the entire process needs to be automated.
  2. It implies that there’s software release tooling and process. Defined versioning is important for determining which code goes where.
  3. It implies that release candidate code or content passes through a level of automated testing. Quality of releases cannot be abandoned when the quantity and frequency of releases increase. If anything, it matters more.

Putting those three tenets together (selecting tooling, defining platforms, and adding in whatever considerations your own company insists on), you can visualize just how complex the automation needs to be to establish a minimum viable product (MVP) for your own CI/CD pipeline.
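The three tenets above can be sketched in miniature. What follows is a minimal, hypothetical illustration, not any real CI tool’s API: the Pipeline class, stage names, and version strings are all my own invention. It shows every stage running unattended, every release carrying a version, and a failed test gating the release.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class Pipeline:
    version: str                                        # tenet 2: every release is versioned
    stages: List[Tuple[str, Callable[[], bool]]] = field(default_factory=list)
    log: List[str] = field(default_factory=list)

    def stage(self, name: str, fn: Callable[[], bool]) -> "Pipeline":
        self.stages.append((name, fn))
        return self

    def run(self) -> bool:
        # tenet 1: the whole chain runs unattended, start to finish
        for name, fn in self.stages:
            ok = fn()
            self.log.append(f"{self.version} {name}: {'pass' if ok else 'FAIL'}")
            if not ok:
                return False                            # tenet 3: a failed check halts the release
        return True

# A release whose automated tests pass ships; one whose tests fail never
# reaches the deploy stage. (The lambdas stand in for real build/test/deploy steps.)
good = (Pipeline("1.4.2")
        .stage("build", lambda: True)
        .stage("test", lambda: True)
        .stage("deploy", lambda: True))
bad = (Pipeline("1.4.3")
       .stage("build", lambda: True)
       .stage("test", lambda: False)
       .stage("deploy", lambda: True))
```

The point of the sketch is the gate in `run()`: once quality checks are automated and wired in-line, "skip the tests just this once" stops being an option.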

A simple (yet ever-expanding, and possibly incomplete) checklist includes tools for:

  • Developers
  • Source code management (SCM)
  • Backlog and bug-tracking
  • Pipeline management
  • Cloud and platform management
  • Target platforms for deployments
  • Security testing
  • Code and content testing
  • Documentation
  • And of course, automation!
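To make the checklist concrete, here is one possible mapping of those categories to well-known open source tools. This pairing is purely illustrative (my own picks, not a prescription or an endorsement of a specific stack); the point is that every category needs a chosen tool before the pipeline can be called end-to-end.

```python
# One possible (by no means the only) toolchain covering the checklist
# categories above. Each entry is an illustrative example, not a mandate.
toolchain = {
    "developers": "Vim",
    "source code management": "Git",
    "backlog and bug-tracking": "Bugzilla",
    "pipeline management": "Jenkins",
    "cloud and platform management": "Kubernetes",
    "target platform": "Red Hat Enterprise Linux",
    "security testing": "OWASP ZAP",
    "code and content testing": "pytest",
    "documentation": "Sphinx",
    "automation": "Ansible",
}

# Any category still lacking a tool is a gap in the pipeline.
gaps = [category for category, tool in toolchain.items() if not tool]
```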

Whether there are 50 steps, or only 5, for a piece of code to reach production, if the entire workflow isn’t automated, is it really DevOps? Many struggle with taking the manual human element out of releasing to production. I’ve seen things fail even when only minimal human intervention was involved. Automation is the solution. The better the automation and testing, the better the safety net.

Only you can define what DevOps means to you

Only you can decide if DevOps is right for your products, your company, and your corporate culture.

Even if you’ve automated everything under the sun, can you say that you’re “done”? Since CI/CD involves a continuous improvement mentality, can we ever truly be “done,” or is such perfection only a myth?

You’ll never know until you take your own journey. May it be a happy one.

 

Want to hear more stories about the OS?

Check out @rossturk’s post on the Magic of Linux, @ThomasDCameron’s post From police officer to Open Source devotee: One man’s story, or @ghaff’s story entitled My journey from BASIC to Linux.

Subscribe to Command Line Heroes, an original podcast by Red Hat, and be sure to follow Matt on Twitter at @MattTheITGuru, as well as on his tech blog.