Microsoft loves Linux. But this wasn’t always the case, as host Saron Yitbarek will share in episodes 1 and 2 of Command Line Heroes, an original podcast from Red Hat airing January 16th, about the OS wars for the desktop and then the datacenter.
Yet, today, here we are talking about Microsoft’s embrace of Linux. Nothing showcases this new approach better than the growing relationship between Microsoft and Red Hat. In this post we’re going to explore one aspect of that relationship–Microsoft SQL Server 2017 on Red Hat® Enterprise Linux®, Microsoft’s reference Linux platform. By using Ansible playbooks and roles to quickly deploy SQL Server, we get to take the best of these tools for a spin.
What is Ansible?
Ansible is a simple, powerful, and agentless automation language that allows us to perform a task–like installing, configuring, and dropping in a database schema–within a matter of minutes. This Red Hat-sponsored community project is particularly powerful because of the strong support from our community and interest from our vendor partners, making it easy for teams of people like Linux and database admins to work together and get systems up quickly and ready for production.
If you’ve never used Ansible before, head to our Getting Started page or check out one of our free workshops. It’s easy to get started and learn how to start automating.
To make it easy for people to automate, we also host Ansible Galaxy, which provides easy access to roles—a way to organize and structure your Ansible playbooks so they can be shared and reused.
This playbook uses some common Ansible modules to configure the base operating system, and then community modules to interact with the SQL Server deployment after initialization.
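As a rough sketch of what such automation looks like, the following writes a minimal playbook and shows how it would be invoked. The host group (`dbservers`) and role name (`mssql`) are hypothetical placeholders, not the actual names used in the playbook discussed in the article:

```shell
# Write a minimal playbook; "dbservers" and "mssql" are hypothetical
# placeholders for illustration only.
cat > install-mssql.yml <<'EOF'
---
- hosts: dbservers
  become: true
  roles:
    - mssql
EOF

# To apply it (requires Ansible and a real inventory):
#   ansible-galaxy install <role>                    # fetch a role from Ansible Galaxy
#   ansible-playbook -i inventory install-mssql.yml  # run the playbook
```

In a real deployment the role would handle registering the Microsoft package repository, installing the packages, and configuring the database, so the playbook itself stays this short.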
Continue reading “How to Install SQL Server 2017 on Red Hat Enterprise Linux with Ansible”
We’ve published a new guide to help you select the right container hosts and images for your container workloads – whether it’s a single container running on a single host, or thousands of workloads running in a Kubernetes/OpenShift environment. Why? Because people don’t know what they don’t know, and we are here to help.
Like “The Cloud” before it, a lot of promises are being made about what capabilities containers might deliver – does anybody remember the promises of cloud bursting? No, not that cloud bursting, this cloud bursting 🙂 Once the dust settles from the hype around a new technology, people learn how to leverage it, while still applying much of their current knowledge. Containers are no different. While they do enable a great deal of portability, they are not magic. They do not guarantee that your application is completely portable through time and space. This is especially true as the supported number of different workloads on Kubernetes expands, the Kubernetes clusters grow larger, and the cluster nodes become more differentiated with specialized hardware. There will be an ever-expanding role for Linux and Kubernetes to tie these disparate technologies together in a consumable way.
Building rock solid Kubernetes clusters starts with a solid foundation – selecting the right container hosts and container images. When selecting these components, architects are making a big decision about lifecycle and the future supportability of their Kubernetes clusters. The supportability of the underlying cluster nodes doesn’t change in a containerized environment – administrators still need to think about configuration management, patching, lifecycle, and security. They also need to think about compatibility with all of the different container images which will run in the environment. Not just simple web servers, but all of the workloads which will run, ranging from HPC, big data, and DNS, to databases, a wide range of third-party applications, and even administrative workloads for troubleshooting the containerized clusters (aka distributed systems). All of these different types of applications are moving to Kubernetes. Workloads drive the need for libraries, language runtimes, and compilers. Sound familiar? Most of these needs are delivered by Linux distributions, like Red Hat Enterprise Linux.
We’ve published a new guide to help you leverage the architectural knowledge you have and apply it as you are building your Kubernetes/OpenShift environment. If you have questions, please post them below, and we will be happy to help guide you:
For many organizations, IT modernization begins with the operating system. In the last few years, migrating workloads to Linux from RISC systems has accelerated as organizations seek to take advantage of the potential price/performance advantage of x86 blade hardware solutions. However, as open source becomes more pervasive, many enterprises are realizing additional benefits. Not only can enterprises reduce (or in some cases eliminate) their reliance on legacy systems by
Continue reading “It’s time to modernize: Your UNIX alternative with Red Hat Enterprise Linux and Microsoft Azure”
While performance benchmarks are often application or industry specific, they can also provide useful insights that are widely applicable. Risk analytics applications used in financial services industries have performance characteristics similar to many technical computing applications. These applications are large, compute intensive, and take full advantage of parallel processing and compute accelerators.
STAC®, the Securities Technology Analysis Center LLC (www.STACresearch.com), provides technology research and testing tools including
Continue reading “Red Hat and Partners Deliver New Performance Records on Prominent Risk Analytics Benchmark”
In our first post discussing Red Hat’s multi-architecture strategy, we focused on the disruptive nature of enabling new and rapidly-evolving architectures and how this enablement necessitates a different set of product requirements to fulfill our vision of providing a consistent and familiar experience to our customers across multiple hardware architectures. While we have been working with many original equipment manufacturers (OEMs) on x86_64-based servers for years, we have seen interest from our customer base in delivering parity across multiple architectures, including IBM Power Little Endian (ppc64le) and ARMv8-A (aarch64).
So what exactly are we doing with our partners to make this
Continue reading “Keeping pace with multiple architectures (Part 2)”
Enrolling a client system into Identity Management (IdM) can be done with a single command, namely: ipa-client-install. This command will configure SSSD, Kerberos, Certmonger and other elements of the system to work with IdM. The important result is that the system will get an identity and key so that it can securely connect to IdM and perform its operations. However, to get the identity and key, the system should
Continue reading “Understanding Identity Management Client Enrollment Workflows”
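The single-command enrollment described above can be sketched as follows; the domain, server, and principal values are illustrative placeholders for a real deployment, and the exact options you need depend on your environment:

```shell
# Enroll this host into IdM, configuring SSSD, Kerberos, and Certmonger.
# All values below are hypothetical stand-ins.
ipa-client-install \
    --domain example.com \
    --server ipa.example.com \
    --principal admin \
    --mkhomedir
```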
In this article I want to show how to migrate a runC container around the world while clients stay connected to the application.
In my previous Checkpoint/Restore In Userspace (CRIU) articles I introduced CRIU (From Checkpoint/Restore to Container Migration) and in the follow-up I gave an example how to use it in combination with containers (Container Live Migration Using runC and CRIU). Recently Christian Horn published an additional article about CRIU which is also a good starting point.
In my container I am running Xonotic. Xonotic calls itself ‘The Free and Fast Arena Shooter’. The part that is running in the container is the server part of the game, to which multiple clients can connect to play together. In this article the client is running on my local system while the server and its container are live migrated around the world.
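At a high level, a checkpoint/restore migration with runC looks like the following sketch. The container name and paths are illustrative, and a real live migration (as covered in the earlier articles) involves additional steps, such as pre-dumps, to shrink the downtime window:

```shell
# On the source host: checkpoint the running container to disk.
runc checkpoint --image-path ./checkpoint xonotic

# Copy the checkpoint images (the container bundle must also be
# available on the target) to the destination host.
rsync -a ./checkpoint target-host:/var/lib/xonotic/checkpoint

# On the target host: restore the container from the images.
runc restore --image-path /var/lib/xonotic/checkpoint xonotic
```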
This article also gives detailed background information about
Continue reading “Container Migration Around The World”
In the previous post I talked about Smart Card Support in Red Hat Enterprise Linux. In this article I will drill down into how to select the right deployment architecture depending on your constraints, your requirements, and the availability of smart card-related functionality in different versions of Red Hat Enterprise Linux.
To select the right architecture for a deployment where users would authenticate using smart cards when logging into Linux systems you need to
Continue reading “Picking your Deployment Architecture”
Red Hat Product Security was made aware of a vulnerability affecting the Linux kernel’s implementation of the Bluetooth L2CAP protocol. The vulnerability was named BlueBorne and was assigned CVE-2017-1000251.
A vulnerable system would need to have Bluetooth (hardware + service) enabled and an attacking device would need to be within
Continue reading “BlueBorne – An Analysis”
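Since a vulnerable system must have Bluetooth enabled, one straightforward way to reduce exposure on systems that do not need Bluetooth, until patched kernel packages are applied, is to turn the service off. A sketch using standard systemd commands:

```shell
# Stop and disable the Bluetooth service (only if Bluetooth is unused).
systemctl stop bluetooth.service
systemctl disable bluetooth.service
```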
Recent Red Hat Enterprise Linux releases have expanded support for smart card-related use cases. However, customers usually have a mixed environment and standardize on a specific version of Red Hat Enterprise Linux for a period of time. It is important to understand the
Continue reading “Smart Card Support in Red Hat Enterprise Linux”