Red Hat and Cisco have a long history of offering joint solutions that benefit our mutual customers and address a gamut of IT challenges, from server sprawl to cloud computing. Both companies consistently foster technological innovation and work towards breaking new ground in computing, including a history of driving world-record performance across a wide range of industry-standard benchmarks.
Industry-standard performance benchmarking, driven by groups like the TPC and SPEC, dates all the way back to 1988. Many of these benchmarks have driven the development of faster, cheaper, and more efficient computer technologies over the past quarter century.
With over a hundred benchmark records to its name, Red Hat Enterprise Linux is known to power some of the most (more…)
This post is the second in a series of blog posts about integrating Linux systems into Active Directory environments. In the previous post we discussed dishwashers and, more seriously, some basic principles. In this post I will continue by exploring how the integration gap between Linux systems and Active Directory emerged, how it was formerly addressed, and what options are available now.
Let’s start with a bit of history. Before the advent of Active Directory, Linux and UNIX systems had developed ways to connect to, and interact with, a central LDAP server for identity look-up and authentication purposes. These connections were basic, but as the environments were not overly complex (in comparison to modern equivalents), they were good enough for the time. Then… AD was born.
Active Directory not only integrated several services (namely: LDAP, Kerberos, and DNS) under one hood, but it also (more…)
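To make the pre-AD identity look-up concrete, a classic Linux/UNIX client would query the central LDAP directory for a user's POSIX attributes with something like the following. This is an illustrative sketch only: the server name, base DN, and `jdoe` account are hypothetical placeholders, not values from any real deployment.

```shell
# Simple (anonymous) bind to a central LDAP server and look up the
# POSIX attributes NSS/PAM would need for the user "jdoe".
# -x = simple authentication, -H = server URI, -b = search base.
ldapsearch -x -H ldap://ldap.example.com \
    -b "ou=People,dc=example,dc=com" \
    "(uid=jdoe)" uid uidNumber gidNumber homeDirectory loginShell
```

On the client side, `/etc/nsswitch.conf` would then be pointed at `ldap` for the `passwd` and `group` databases so that logins resolved against the directory rather than local files.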
Have you ever purchased a new dishwasher? For those of you who have, you know that the dishes don’t get washed until your “purchase” is picked up or delivered, the old dishwasher is removed, and the new unit is hooked up. In fact, until the new dishwasher is hooked up, it simply doesn’t work. The dishwasher can be smart, stylish, noiseless, and/or energy-efficient… but none of this matters if it’s not properly connected. At the end of the day, if you want to enjoy the luxury of automatic dish washing, one thing is clear: your new dishwasher needs to be hooked up.
The act of hooking up a dishwasher is not unlike adding a Linux system to an existing enterprise IT environment. When you deploy a Linux system, it too needs to be “hooked up”. As the data that flows through your environment consists of different kinds of objects (e.g. users, groups, hosts, and services), the associated identity information is not unlike the water in your dishwasher. Without this identity information (more…)
The memory subsystem is one of the most critical components of modern server systems: it supplies run-time data and instructions to applications and to the operating system. Red Hat Enterprise Linux provides a number of tools for managing memory. This post illustrates how you can use these tools to boost the performance of systems with NUMA topologies. (more…)
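As a taste of the kind of tooling the post covers, the commands below sketch how NUMA topology can be inspected and how a process can be bound to a single node on Red Hat Enterprise Linux. The `numactl` and `numastat` utilities ship in the `numactl` package; `myapp` is a hypothetical placeholder for your own workload.

```shell
# Show the NUMA layout: nodes, their CPUs, and memory per node.
numactl --hardware

# Per-node allocation statistics (numa_hit, numa_miss, etc.);
# a high numa_miss count suggests poor memory locality.
numastat

# Run a workload with both its CPUs and its memory confined to
# node 0, avoiding slower cross-node memory accesses.
numactl --cpunodebind=0 --membind=0 myapp
```

Binding CPU and memory to the same node keeps memory accesses local, which is typically where the NUMA-related performance wins come from.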
The OpenShift Online Technical Operations team was looking forward to the beta availability of Red Hat Enterprise Linux Atomic Host. In fact, they participated in early sprints as part of the Atomic Special Interest Group (SIG) to help make sure Red Hat Enterprise Linux Atomic Host had the operational “beef” to stand alongside Red Hat’s other enterprise products. As part of this process, we ran the unreleased bits in OpenShift Online prior to the beta announcement.
That said, we’re not using it to run some niche corner of our infrastructure. Instead, we are using the Red Hat Enterprise Linux Atomic Host + Docker combo to run our reverse proxy tier. This means that every API, www.openshift.com, and web console request made to OpenShift Online runs through this tier.
So why all the interest? The small size of Red Hat Enterprise Linux Atomic Host is the (more…)
In November we announced the Red Hat Enterprise Linux 7 Atomic Host Public Beta, a small-footprint container host based on Red Hat Enterprise Linux 7. It provides a stable host platform, optimized for running application containers, and brings a number of application software packaging and deployment benefits to customers.
What are the top 7 reasons to deploy containers on Red Hat Enterprise Linux 7 Atomic Host? (more…)
Applications don’t always work as expected, and “it works fine on my machine” (often the first response to a reported issue) has been around for decades. One way to avoid the challenge of application issues in production is to maintain identical environments for development, testing, and production. Another is to create a Continuous Integration environment, where code is compiled, deployed to test machines, and vetted with each and every code check-in, long before being pushed to production.
Enter containers. (more…)
Several weeks ago Red Hat and Cisco collaborated on a whitepaper on Linux containers for IT leaders and industry analysts. The following is an excerpt from the first page:
“Linux containers and Docker are poised to radically change the way applications are built, shipped, deployed, and instantiated. They accelerate application delivery by making it easy to package applications along with their dependencies. As a result, the same containerized application can operate in different development, test, and production environments. The platform can be a physical server, virtual server, public cloud, or network device.”
Interested in reading more? Click (more…)
It’s been one week since we announced the beta for Red Hat Enterprise Linux 7 Atomic Host and we’re looking for your feedback. If you’ve downloaded and installed the beta, this is your chance to tell us what you think, and what you’d like to see in the product moving forward.
Red Hat Enterprise Linux 7 Atomic Host Beta is an operating platform that is optimized and minimized to run containers. It packages key components of Red Hat Enterprise Linux 7 such as SELinux, systemd, and tuned with the kernel to facilitate running containers in a secure and optimized manner. It also offers Kubernetes and Docker to facilitate the rapid creation, deployment, and orchestration of containers – simplifying the life cycle management of applications and systems.
Containers allow users to put applications and all of their runtime dependencies into secure packages that are both easy to deploy and easy to manage. Containers are also portable: images of a given container can be copied and replicated to other systems. Since containers are isolated from each other and from the host OS, libraries and application binaries can be updated individually without affecting other containers or the host OS (and vice versa).
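The portability described above can be sketched with the Docker CLI of that era. This is an illustrative example, not a prescribed workflow: `myapp`, the `web` container name, and `otherhost` are hypothetical placeholders.

```shell
# Run a containerized application in the background; the host OS
# and other containers are unaffected by what is inside the image.
docker run -d --name web myapp

# Copy the image to another system by streaming it over SSH:
# "save" serializes the image to a tar stream, "load" imports it.
docker save myapp | ssh otherhost docker load
```

Because the image carries the application together with its libraries, the copy on `otherhost` runs with the same runtime dependencies, regardless of what is installed on that host.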
The following video mirrors the demo as presented (more…)