Combining PTP with NTP to Get the Best of Both Worlds

Red Hat Enterprise Linux supports two protocols for synchronization of computer clocks over a network. The older and better-known protocol is the Network Time Protocol (NTP); its fourth version is defined by the IETF in RFC 5905. The newer protocol is the Precision Time Protocol (PTP), which is defined in the IEEE 1588-2008 standard.

The reference implementation of NTP is provided in the ntp package. Starting with Red Hat Enterprise Linux 7.0 (and now in Red Hat Enterprise Linux 6.8), a more versatile NTP implementation is also provided via the chrony package, which can usually synchronize the clock with better accuracy and has other advantages over the reference implementation. PTP is implemented in the linuxptp package.
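For readers curious what an NTP exchange actually looks like on the wire, here is a minimal, illustrative Python sketch of a single SNTP client query. This is not how ntpd, chronyd, or the linuxptp tools are configured or used; the pool.ntp.org hostname is only a placeholder, not something taken from the article.

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP epoch) and 1970-01-01 (Unix epoch)

def query_ntp(server="pool.ntp.org", port=123, timeout=5.0):
    """Send one SNTP client request and return the server's transmit time as a Unix timestamp."""
    # Minimal 48-byte request: LI=0, VN=3, Mode=3 (client); all other fields zero.
    packet = bytearray(48)
    packet[0] = (0 << 6) | (3 << 3) | 3
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, port))
        data, _ = sock.recvfrom(48)
    # Transmit timestamp lives in bytes 40-47: 32-bit seconds plus 32-bit fraction.
    secs, frac = struct.unpack("!II", data[40:48])
    return secs - NTP_EPOCH_OFFSET + frac / 2**32

if __name__ == "__main__":
    remote = query_ntp()
    print(f"rough offset from local clock: {remote - time.time():+.3f} s")
```

Running it prints a rough offset between the queried server and the local clock; real daemons such as chronyd do far more measurement filtering and clock disciplining than this single-shot query.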

With two different protocols designed for synchronization of clocks, there is an obvious question as to which one is

Continue reading “Combining PTP with NTP to Get the Best of Both Worlds”

I Really Can’t Rename My Hosts!

Hello again! In this post I will be sharing some ideas about what you can do to solve a complex identity management challenge.

As the adoption of Identity Management (IdM) grows, especially in heterogeneous environments where some systems run Linux and user accounts live in Active Directory (AD), the question of renaming hosts becomes more and more relevant. Here is a set of requirements that we often hear from customers

Continue reading “I Really Can’t Rename My Hosts!”

In Defense of the Pet Container, Part 2: Wrappers, Aggregates and Models… Oh My!

In our first post defending the pet container, we looked at the challenge of complexity facing modern software stacks and at one way containers address it: aggregation. In essence, the Docker “wrapper” consolidates the next level of the stack, much like RPM did at the component level, but aggregation is just the beginning of what the project provides.

If we take a step back and look at the Docker project in context, there are four aspects that contribute to its exceptional popularity:

  1. it simplifies the way users interact with the kernel, for the features we have come to call Linux containers;
  2. it is a tool and format for aggregate packaging of software stacks to be deployed into containers;
  3. it is a model for layering generations of changes on top of each other, using single inheritance;
  4. it adds a transport for these aggregate packages.

Continue reading “In Defense of the Pet Container, Part 2: Wrappers, Aggregates and Models… Oh My!”

Identity Management and Application Integration

Identity management solutions integrate systems, services, and applications into a single ecosystem that provides authentication, access control, enterprise SSO, identity information, and the policies related to those identities. While I have dedicated time to explaining how to provide these capabilities to Linux systems, it is now time to broaden the scope and talk a little bit about services and applications.

In some ways, services and applications are very similar. They are both usually

Continue reading “Identity Management and Application Integration”

Connecting the Dots at Linaro Connect

Red Hat has long advocated for the importance of cross-industry IT standards, with the intention of enabling ecosystems with broad industry participation and providing a common basis for innovation. Perhaps even more importantly, these standards can help drive adoption of new technologies within enterprises, pushing the cycle of innovation even further along.

With ARM being one of these emerging ecosystems, we wanted to provide a snapshot of a recent event that highlights some of the standards-based work happening in this growing community: last week’s Linaro Connect conference in Bangkok, Thailand.

Continue reading “Connecting the Dots at Linaro Connect”

Schrodinger’s Container: How Red Hat is Building a Better Linux Container Scanner

The rapid rise of Linux containers as an enterprise-ready technology in 2015, thanks in no small part to the technology provided by the Docker project, should come as no surprise: Linux containers offer a broad array of benefits to the enterprise, from greater application portability and scalability to the ability to fully leverage the benefits of composite applications.

But these benefits aside, Linux containers can, if IT security procedures are not followed, also cause serious harm to mission-critical operations. As Red Hat’s Lars Herrmann has pointed out, containers aren’t exactly transparent when it comes to seeing and understanding all of their internal code. This means that tools and technologies to actually see inside a container are critical to enterprises that want to deploy Linux containers in mission-critical scenarios.

Continue reading “Schrodinger’s Container: How Red Hat is Building a Better Linux Container Scanner”

Getting Started: Using Performance Co-Pilot and Vector for Browser Based Metric Visualizations

Performance Co-Pilot (PCP) is an open source, distributed metrics gathering and analysis system. In the latest release of Red Hat Enterprise Linux (7.2), we’re shipping not only PCP 3.10.6 but also a new browser-based dashboard, Vector, which is built on top of PCP and contributed by Netflix. Together, they can provide a comprehensive overview of a local or remote machine.
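Vector visualizes data it pulls from PCP over a REST interface (served by pmwebd in this release). The snippet below is a rough Python sketch of that kind of interaction, creating a context and fetching one metric; the port number, endpoint paths, query parameters, and response field names are assumptions based on the pmwebapi interface of that era and should be checked against the documentation for your installed version.

```python
import json
import urllib.request

# Assumption: pmwebd is running on its default port (44323) and exposes the
# /pmapi REST interface that Vector itself polls. Endpoint and field names
# below are illustrative and should be verified against pmwebapi docs.
BASE = "http://localhost:44323/pmapi"

def get_json(url):
    """Issue a GET request and decode the JSON response body."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.loads(resp.read().decode("utf-8"))

# 1. Create a PMAPI context for the local host; the response carries a
#    numeric context identifier used in subsequent requests.
ctx = get_json(BASE + "/context?hostname=localhost&polltimeout=30")["context"]

# 2. Fetch a metric by name; kernel.all.load holds the 1/5/15-minute load averages.
fetch = get_json(BASE + "/%d/_fetch?names=kernel.all.load" % ctx)

for metric in fetch.get("values", []):
    values = [inst.get("value") for inst in metric.get("instances", [])]
    print(metric.get("name", "kernel.all.load"), values)
```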

In this tutorial, we’ll be utilizing two different machines to demonstrate

Continue reading “Getting Started: Using Performance Co-Pilot and Vector for Browser Based Metric Visualizations”

High Performance Computing Everywhere for Financial Services (and Beyond)

Information technology has changed every industry in the past 20 years, to the point that IT systems are no longer the domain of just the technologists. Business decision makers are actively involved in the planning, purchasing, and deployment of technologies today. And one of the critical issues for business executives is getting more timely information and greater value from enterprise systems.

Continue reading “High Performance Computing Everywhere for Financial Services (and Beyond)”

The History of Containers

Given the recent massive spike in interest in Linux containers, you could be forgiven for wondering, “Why now?” It has been argued that the increasingly prevalent cloud computing model more closely resembles hosting providers than traditional enterprise IT, and that containers are a perfect match for this model.

Despite the sudden ubiquity of container technology, like so much in the world of open source software, containerization depends on a long series of previous innovations, especially in the operating system. “One cannot resist an idea whose time has come.” Containers are such an idea, one that has been a long time coming.

Continue reading “The History of Containers”

See You at ContainerCon in Seattle

If you’re looking at running Linux containers, you should be heading to ContainerCon in Seattle next week. Co-located with LinuxCon and CloudOpen, ContainerCon is where leading contributors in Linux containers, the Linux kernel, and related projects will get together to educate the community on containers and related innovations.

Red Hatters are contributing to over 40 ContainerCon sessions on this year’s agenda, including a keynote from Red Hat VP of Engineering Matt Hicks. In “Revolutionizing Application Delivery with Linux and Containers,” Matt will focus on how Linux containers are changing the way that companies develop, consume and manage applications and will emphasize how open source communities and projects like Docker and Kubernetes are delivering this next wave of enterprise application architecture.

If you’re attending ContainerCon, check out Matt’s keynote and some of the sessions below:

Continue reading “See You at ContainerCon in Seattle”