As mentioned in my previous post, there are multiple ways to connect a Linux system directly to Active Directory (AD). With this in mind, let us review the following list of options…
The legacy integration option: this is a solution where (likely older) native Linux tools are used to connect to an LDAP server of your choice (e.g. AD).
The traditional integration option: this is a solution based on Samba winbind.
The third-party integration option: this is a solution based on (proprietary) commercial software.
The contemporary integration option: this is a solution based on SSSD.
Legacy Integration Option
In the case of the legacy integration option (see figure above), a Linux system is connected to AD using LDAP for identity lookup and LDAP or Kerberos for authentication. This approach largely solves the problem of basic user authentication. That said, such a solution has the following significant limitations:
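As a rough sketch of what this legacy wiring looks like, the native name-service and PAM stacks are pointed at the directory via nss_ldap/pam_ldap-style configuration. The server URI and base DN below are illustrative placeholders, not values from any particular deployment:

```
# /etc/nsswitch.conf -- resolve users and groups from local files, then LDAP
passwd: files ldap
shadow: files ldap
group:  files ldap

# /etc/ldap.conf -- point the legacy LDAP client at the directory (AD)
uri  ldap://ad.example.com
base dc=example,dc=com
# Note: AD must expose RFC 2307 (POSIX) attributes, e.g. via
# Services for UNIX, for uidNumber/gidNumber lookups to work.
```

Authentication can then go over LDAP simple binds (ideally with TLS) or, with additional configuration, Kerberos.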
Continue reading “Overview of Direct Integration Options”
Linux permanently changed the landscape of the datacenter by creating a community approach to rapid innovation. Its introduction and widespread adoption have fueled a shift from closed to open systems, often providing greater resiliency than other operating environments. Commodity x86 architectures are only one slice of a much larger market for reliable open source enterprise-class systems – and Linux has for many years been a cross-platform operating system. For example, did you know that Red Hat Enterprise Linux also runs on IBM’s Power Systems (POWER) and z Systems architectures? These options give IT organizations flexibility with respect to hardware for workloads and use cases ranging from big data analytics to cloud computing. Ensuring that Red Hat Enterprise Linux runs on IBM’s Power Systems and z Systems architectures gives our customers a broad range of application and deployment choices.
Red Hat Enterprise Linux for Power and Red Hat Enterprise Linux for System z are built
Continue reading “What’s Moving in the World of POWER?”
Red Hat and Cisco have a long history of offering joint solutions that benefit our mutual customers and address a gamut of IT challenges, from server sprawl to cloud computing. Both companies consistently foster technological innovation and work towards breaking new ground in computing, including a history of driving world-record performance across a wide range of industry-standard benchmarks.
Industry standard performance benchmarking, driven by groups like TPC and SPEC, goes all the way back to 1988. Many of these benchmarks have driven the development of faster, cheaper, and more efficient computer technologies over the course of the past quarter century.
With over a hundred benchmark records to its name, Red Hat Enterprise Linux is known to power some of the most
Continue reading “Red Hat Enterprise Linux: Powering the World’s First Big Data Benchmark Results with Cisco”
This post is the second in a series of blog posts about integrating Linux systems into Active Directory environments. In the previous post we discussed dishwashers and, more seriously, some basic principles. In this post I will continue by exploring how the integration gap between Linux systems and Active Directory emerged, how it was formerly addressed, and what options are available now.
Let’s start with a bit of history… Before the advent of Active Directory, Linux and UNIX systems had developed ways to connect to, and interact with, a central LDAP server for identity lookup and authentication purposes. These connections were basic, but as the environments were not overly complex (in comparison to modern equivalents) – they were good enough for the time. Then… AD was born.
Active Directory not only integrated several services (namely: LDAP, Kerberos, and DNS) under one hood, but it also
The memory subsystem is one of the most critical components of modern server systems: it supplies run-time data and instructions to applications and to the operating system. Red Hat Enterprise Linux provides a number of tools for managing memory. This post illustrates how you can use these tools to boost the performance of systems with NUMA topologies.
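As a hedged sketch of one such tool, numactl can pin a workload’s CPUs and memory allocations to a single NUMA node so its memory accesses stay local. The node number and workload name below are illustrative placeholders:

```shell
# Inspect the topology first (node count, per-node free memory):
#   numactl --hardware
#   numastat

# Compose a numactl prefix that binds both CPU scheduling and memory
# allocation to one node, keeping the workload's accesses node-local.
numactl_prefix() {
  local node="$1"
  printf 'numactl --cpunodebind=%s --membind=%s' "$node" "$node"
}

# Illustrative usage (my_app is a placeholder):
#   $(numactl_prefix 0) ./my_app
echo "$(numactl_prefix 0)"
```

Binding both CPU and memory to the same node avoids cross-node memory traffic, which is typically the main source of NUMA-related slowdowns.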
Continue reading “Mysteries of NUMA Memory Management Revealed”
The OpenShift Online Technical Operations team had been looking forward to the beta availability of Red Hat Enterprise Linux Atomic Host. In fact, they participated in early sprints as part of the Atomic Special Interest Group (SIG) to help ensure Red Hat Enterprise Linux Atomic Host had the operational “beef” to stand alongside Red Hat’s other enterprise products. As part of this process, we ran the unreleased bits in OpenShift Online prior to the beta announcement.
That said, we’re not using it to run some niche corner of our infrastructure. Instead, we are using the Red Hat Enterprise Linux Atomic Host + Docker combination to run our reverse proxy tier. This means that every API, www.openshift.com, and web console request made to OpenShift Online runs through this tier.
So why all the interest? The small size of Red Hat Enterprise Linux Atomic Host is the
Continue reading “How Red Hat Enterprise Linux Atomic Host Powers OpenShift Online”
Applications don’t always work as expected, and “it works fine on my machine” — the first line of response when reporting an issue — has been around for decades. One way to avoid the challenge of application issues in production is to maintain identical environments for development, testing, and production. Another is to create a Continuous Integration environment, where code is compiled and deployed to test machines and vetted with each and every code check-in, long before being pushed to production.
Continue reading “Containers: Stumbling on the Road to Utopia”
Several weeks ago Red Hat and Cisco collaborated on a whitepaper for IT leaders and industry analysts on Linux containers. The following is an excerpt from the first page:
“Linux containers and Docker are poised to radically change the way applications are built, shipped, deployed, and instantiated. They accelerate application delivery by making it easy to package applications along with their dependencies. As a result, the same containerized application can operate in different development, test, and production environments. The platform can be a physical server, virtual server, public cloud, or network device.”
Interested in reading more? Click
Continue reading “Linux Containers: Why They’re in Your Future and What Has to Happen First”
Red Hat Enterprise Linux 7 Atomic Host Beta is an operating platform that is optimized and minimized to run containers. It packages key components of Red Hat Enterprise Linux 7 such as SELinux, systemd, and tuned with the kernel to facilitate running containers securely and efficiently. It also offers Kubernetes and Docker to facilitate the rapid creation, deployment, and orchestration of containers – simplifying the life cycle management of applications and systems.
Containers allow users to put an application and all of its runtime dependencies into secure packages that are both easy to deploy and easy to manage. Containers are also portable, and images of a given container can be copied and replicated to other systems. Since containers are isolated both from each other and from the host OS, libraries and application binaries can be updated individually without affecting other containers or the host OS (and vice versa).
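The deploy-and-replicate life cycle described above can be sketched with the Docker CLI. The image name, container name, and port are illustrative placeholders; the commands are composed into variables so the sequence can be reviewed (echo) or executed (eval) as needed:

```shell
# Placeholder image for this sketch:
IMAGE="registry.example.com/myapp:latest"

# Pull the image, run it detached with a published port, then export
# the image so it can be replicated to another system:
pull_cmd="docker pull $IMAGE"
run_cmd="docker run -d --name myapp -p 8080:8080 $IMAGE"
save_cmd="docker save -o myapp.tar $IMAGE"

echo "$pull_cmd"
echo "$run_cmd"
echo "$save_cmd"
# On the other host: docker load -i myapp.tar, then run the same image.
```

Because the image carries the application and its dependencies together, the loaded copy behaves the same on the second host as on the first.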
The following video mirrors the demo as presented
Continue reading “Performance Testing Red Hat Enterprise Linux 7 Atomic Host Beta on Amazon EC2”
Developers and system administrators need better ways to deliver applications with increased speed and flexibility. Linux Containers, when used as an open source application packaging and delivery technology, meet this need by combining lightweight application isolation with the flexibility of an image-based deployment method. Red Hat has been working hard to make container technologies safer and easier to consume for the enterprise. Yesterday, at AWS re:Invent, we continued to make progress by offering attendees a chance to dive deep and develop skills for working with containers on AWS at a technical bootcamp.
This full-day, in-person training session provided a chance for developers and system administrators to learn first-hand from Red Hat experts and gain the skills to deploy container-based applications on AWS. Content included instructor-led presentations and practical exercises, with several hands-on labs.
Through a series of labs
Continue reading “AWS re:Invent Bootcamp Attendees Learn How to Accelerate Development with Linux Containers”