How to Install SQL Server 2017 on Red Hat Enterprise Linux with Ansible

Microsoft loves Linux. But this wasn’t always the case, as host Saron Yitbarek shares in episodes 1 and 2 of Command Line Heroes, an original podcast from Red Hat airing January 16th, which cover the OS wars for the desktop and then the datacenter.

Yet, today, here we are talking about Microsoft’s embrace of Linux. Nothing showcases this new approach better than the growing relationship between Microsoft and Red Hat. In this post we’re going to explore one aspect of that relationship: Microsoft SQL Server 2017 on Red Hat® Enterprise Linux®, Microsoft’s reference Linux platform. By using Ansible playbooks and roles to quickly deploy SQL Server, we get to take the best of these tools for a spin.

What is Ansible?

Ansible is a simple, powerful, and agentless automation language that lets us perform a task, like installing and configuring a database server and dropping in a schema, within a matter of minutes. This Red Hat-sponsored community project is particularly powerful because of the strong support from our community and the interest from our vendor partners, which make it easy for teams of Linux and database admins to work together and get systems up quickly and ready for production.

If you’ve never used Ansible before, head to our Getting Started page or check out one of our free workshops. It’s easy to get started and learn how to start automating.

To make it easy for people to automate, we also host Ansible Galaxy, which provides easy access to roles, a reusable way to organize and structure your Ansible playbooks.

This playbook uses some common Ansible modules to configure the base operating system, and then community modules to interact with the SQL Server deployment after initialization.
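To give a feel for what such a playbook looks like, here is a minimal sketch of the install-and-initialize flow. This is not the exact playbook from the post: the host group, variable names, and edition (`MSSQL_PID`) are illustrative assumptions, though the repo URL, `mssql-server` package, and `mssql-conf` setup step follow Microsoft’s documented install procedure for RHEL.

```yaml
---
# Sketch only: host group and variable names are placeholders.
- hosts: dbservers
  become: true
  vars:
    # Keep the SA password in Ansible Vault rather than in the playbook.
    mssql_sa_password: "{{ vault_mssql_sa_password }}"
  tasks:
    - name: Add the Microsoft SQL Server 2017 yum repository
      get_url:
        url: https://packages.microsoft.com/config/rhel/7/mssql-server-2017.repo
        dest: /etc/yum.repos.d/mssql-server.repo

    - name: Install the SQL Server package
      yum:
        name: mssql-server
        state: present

    - name: Run the initial setup non-interactively
      command: /opt/mssql/bin/mssql-conf -n setup accept-eula
      environment:
        MSSQL_SA_PASSWORD: "{{ mssql_sa_password }}"
        MSSQL_PID: Developer   # edition choice is an assumption for this sketch
      args:
        creates: /var/opt/mssql/data/master.mdf   # makes the task idempotent

    - name: Ensure the service is enabled and running
      service:
        name: mssql-server
        state: started
        enabled: true
```

The `creates` argument is what keeps the setup step idempotent: re-running the playbook skips initialization if the master database already exists, which is the behavior you want from an Ansible-driven deployment.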

Continue reading “How to Install SQL Server 2017 on Red Hat Enterprise Linux with Ansible”

From police officer to Open Source devotee: One man’s story

This post is brought to you by Command Line Heroes, an original podcast from Red Hat.

I guess you could call me an “accidental technologist.” Growing up, I never intended to work with computers. When I was younger, I actually only tinkered with PCs at home or at friends’ houses because you had to learn how to edit the config files on DOS to free up enough memory so that games could have 512k memory to run. You had to understand what a device driver was, and how to install it and add it to the CONFIG.SYS or AUTOEXEC.BAT files so that you could have a working mouse on the console. I also learned about interrupts, IO addresses, and DMA channels from configuring things like ISA SoundBlaster cards with jumpers and DIP switches. Without meaning to, or even realizing it, I had a pretty decent understanding of personal computers. I was not a programmer, not by a long stretch, but I could get computers working pretty quickly. And I loved automating things in batch files. The power of scripting was clear.

When I was younger, I never wanted to be anything except a police officer. So when I was 16, I became a uniformed volunteer through the Police Explorer program at my local department. When I was 18, I became a jail officer. When I was 21, I graduated #1 from the Travis County Sheriff’s Academy, and became a patrolman in a small town in central Texas. When I was 24, after 8 years of non-sworn and sworn law enforcement work, I realized that working nights for so little pay that I qualified for welfare was not the smartest idea. I decided to go to college and change careers. Since I had enjoyed tinkering with computers, I figured I’d learn more about them. I settled on getting a degree in computer science.

But I had to have a source of income while I went to school, so I got a job as a technical writer with a small manufacturing company. Because I’d spent so many years writing reports which enumerated the chronology of events and the elements of an offense, I discovered that I was pretty good at documenting and even improving the processes used to manufacture telecommunications gear. And I got the reputation as the guy to go to when the computers were acting up. We had a sysadmin, but he covered multiple locations, and I only knew him very peripherally until the day he turned in his notice. His three-day notice. His angry, three-day notice. The plant owner looked at me and said “Thomas, you’re a computer guy. Shadow this guy for three days, learn everything he does, and you’re going to run the computers from now on.” Turns out, the sysadmin was a Certified Novell Engineer with years and years of experience. And very little motivation to actually teach me anything. Did I mention that he was leaving? And angry?

Continue reading “From police officer to Open Source devotee: One man’s story”

(Linux and) The enduring magic of Unix

This post is brought to you by Command Line Heroes, an original podcast from Red Hat.

In the summer of my 14th year, I needed a new computer.

Kids today need things all the time, but I really needed a new computer. Why? Because the PC clone I shared with my dad had a 286 processor and Linux® required at least a 386. I tried the Slackware boot disk one of my dad’s colleagues gave me anyway, but the screen would display “LI” and then freeze, two letters shy of the “LILO” it would print when the bootloader was successful.

The kernel just didn’t know what to do with my antiquated processor. I spent a lot of time looking at a frozen screen with nothing but “LI_” on it.

So I made a deal with my dad (I earn half, you pay half) and got a McJob. Three months later, I had saved $250, half the amount needed to buy a used 386dx40 from some random computer store in the classified section of the Rocky Mountain News. I gave up my summer for Linux. OK, so I wasn’t going to be outside playing football or something, but it was still summer and I spent it filling sodas. Why did I care about Linux that much?

Let’s turn the clock back a little. A year before that, I got a 2400bps modem for my birthday. Dialing into small BBS systems was a unique thrill. The local BBS community showed traces of life everywhere – software, message boards, welcome screens – but never any actual people. It was exciting and unfulfilling at the same time, and there was only so much I could explore without running up the phone bill. After a season or two I had pretty much exhausted the entertainment potential of the reachable BBS universe.

Continue reading “(Linux and) The enduring magic of Unix”

Container Images and Hosts: Selecting the Right Components

We’ve published a new guide to help you select the right container hosts and images for your container workloads, whether that’s a single container running on a single host or thousands of workloads running in a Kubernetes/OpenShift environment. Why? Because people don’t know what they don’t know, and we are here to help.

Like “The Cloud” before it, a lot of promises are being made about what capabilities containers might deliver – does anybody remember the promises of cloud bursting? No, not that cloud bursting, this cloud bursting 🙂 Once the dust settles from the hype around a new technology, people learn how to leverage it, while still applying much of their current knowledge. Containers are no different. While they do enable a great deal of portability, they are not magic. They do not guarantee that your application is completely portable through time and space. This is especially true as the supported number of different workloads on Kubernetes expands, the Kubernetes clusters grow larger, and the cluster nodes become more differentiated with specialized hardware. There will be an ever-expanding role for Linux and Kubernetes to tie these disparate technologies together in a consumable way.

Building rock-solid Kubernetes clusters starts with a solid foundation – selecting the right container hosts and container images. When selecting these components, architects are making a big decision about lifecycle and the future supportability of their Kubernetes clusters. The supportability of the underlying cluster nodes doesn’t change in a containerized environment – administrators still need to think about configuration management, patching, lifecycle, and security. They also need to think about compatibility with all of the different container images which will run in the environment. Not just simple web servers, but all of the workloads which will run, ranging from HPC, big data, and DNS, to databases, a wide range of third-party applications, and even administrative workloads for troubleshooting the containerized clusters (aka distributed systems). All of these different types of applications are moving to Kubernetes. Workloads drive the need for libraries, language runtimes, and compilers. Sound familiar? Most of these needs are delivered by Linux distributions, like Red Hat Enterprise Linux.
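The image choice described above is not abstract: it surfaces concretely in every workload definition you deploy. As a minimal illustration, here is a sketch of a Kubernetes Deployment where the `image:` field is precisely where the base-image decision (userspace libraries, language runtimes, support lifecycle) lands. The registry, image name, and tag here are hypothetical placeholders, not real artifacts.

```yaml
# Illustrative only: registry, image name, and tag are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: web
        # The base layers of this image pin the userspace (glibc, runtimes,
        # libraries) and its support window -- the decision the guide covers.
        image: registry.example.com/rhel7/example-web:1.0
        ports:
        - containerPort: 8080
```

Every node that schedules this pod must be able to run that image’s userspace, which is why host and image selection are two halves of the same architectural decision.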

We’ve published a new guide to help you leverage the architectural knowledge you have and apply it as you are building your Kubernetes/OpenShift environment. If you have questions, please post them below, and we will be happy to help guide you:

https://www.redhat.com/en/resources/container-image-host-guide-technology-detail

Red Hat Virtualization 4.2 Beta is Live!

We are pleased to announce the beta availability of Red Hat Virtualization 4.2, the latest version of our Red Hat Virtualization platform. Sixteen months into its lifecycle, Red Hat Virtualization continues to provide enterprises with a rich and stable foundation for both existing applications and a new generation of workloads and solutions.

The beta release of Red Hat Virtualization 4.2 includes a number of new and updated features to help organizations streamline and automate operations, improve the virtualization administrator experience, and mitigate risk in the environment.

While there are numerous new features and bug fixes, there is not enough room to list them all here. However, I would like to highlight a few of the additions that make the RHV 4.2 beta remarkable. Some of the new features that you should look forward to include:

Updated User Interface (UI) – When RHV 4.0 was released in August of 2016, it showcased the new dashboard tab not only as a new way of viewing essential resource utilization within RHV, but also as a preview of how virtualization administrators will interact with RHV in the future. The RHV 4.2 beta makes significant strides in furthering those UI updates.

Disaster Recovery (DR) – This is a native site-to-site failover solution. Instead of an integration with a specific storage vendor, it depends on storage at both sites that can be replicated reliably and consistently. Under the covers, Ansible is used extensively to automate the failover and failback process.

Software Defined Networking (SDN) – Open Virtual Network (OVN) has been integrated with Red Hat Virtualization to deliver a native SDN solution, via Open vSwitch. It provides automated management of network infrastructure, a Neutron compatible API for external network providers, as well as network self-service for users, freeing up network administrators from infrastructure requests.

Metrics and Logging – The new metrics and logging solution is built around the Elasticsearch, Fluentd, and Kibana (EFK) stack; the same stack as used by Red Hat OpenShift Container Platform. The new metrics store provides much more functionality and details on the RHV environment than what was previously available.

High Performance Virtual Machine (VM) – The RHV 4.2 beta release provides a new virtual machine type, called High Performance, when configuring VMs. It is capable of running a VM with the highest possible performance, as close to bare metal as possible. This greatly streamlines the process of configuring the characteristics of a virtual machine compared with the previous manual-only methods.

Support for Ceph via iSCSI – The Ceph iSCSI target has been tested and certified as a storage domain for virtual machines. This provides more infrastructure and deployment choices for engineers and architects.

Cisco ACI Integration – Cisco ACI provides overlay protocols that support both physical and virtual hosts in the same logical network even while running Layer 3 routing. This integration provides additional options for customers, especially those that utilize Cisco ACI as part of their infrastructure.

Many thanks to the engineers, product managers, project managers, writers, and everyone else that contributed to the delivery of this release!

For additional information on the Red Hat Virtualization 4.2 beta release, see the following links:

Hope this helps,

Captain KVM