Red Hat Enterprise Linux Atomic Host is a small footprint, purpose-built version of Red Hat Enterprise Linux that is designed to run containerized workloads. Building on the success of our last release, Red Hat’s Atomic-OpenShift team is excited to announce the general availability of Red Hat Enterprise Linux Atomic Host 7.2.6. This release features improvements in rpm-ostree, cockpit, skopeo, docker, and the atomic CLI. The full release notes can be found here. This post is going to explore a major new feature
Continue reading “Announcing Red Hat Enterprise Linux Atomic Host 7.2.6”
As the number of production deployments of Identity Management (IdM) grows, and as many more pilots and proofs of concept get under way, it becomes increasingly important to talk about best practices. Every production deployment needs to deal with concerns like failover, scalability, and performance. In turn, there are a few practical questions that need to be answered, namely:
- How many replicas do I need?
- How should these replicas be distributed between my datacenters?
- How should these replicas be connected to each other?
The answers to these questions depend on
Continue reading “Thinking Through an Identity Management Deployment”
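The topology question above can be sketched in code. The following is a minimal illustration, not guidance from the post itself: it models replicas and their replication agreements as a small graph and checks two commonly cited rules of thumb (each replica should participate in at least two replication agreements for redundancy, and the topology must be fully connected). The replica and datacenter names are invented for the example.

```python
# Hypothetical sketch: model an IdM replica topology as an adjacency map
# and check two commonly cited rules of thumb.
from collections import deque

def topology_ok(agreements):
    """agreements: dict mapping replica name -> set of directly connected replicas."""
    # Rule of thumb: each replica should have at least two agreements,
    # so that losing one peer does not isolate it.
    if any(len(peers) < 2 for peers in agreements.values()):
        return False
    # The topology must be connected: a breadth-first search from any
    # replica should reach every other replica.
    start = next(iter(agreements))
    seen, queue = {start}, deque([start])
    while queue:
        for peer in agreements[queue.popleft()]:
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return seen == set(agreements)

# Two datacenters, four replicas, ring topology: every replica has two links.
ring = {
    "dc1-a": {"dc1-b", "dc2-b"},
    "dc1-b": {"dc1-a", "dc2-a"},
    "dc2-a": {"dc1-b", "dc2-b"},
    "dc2-b": {"dc2-a", "dc1-a"},
}
print(topology_ok(ring))   # True

# A replica hanging off a single agreement is a single point of failure.
spur = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
print(topology_ok(spur))   # False
```

A real deployment plan would weigh link latency between datacenters and the write load on each replica as well, which is exactly the kind of trade-off the full post walks through.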
In November 2015, I blogged about the announcement that .NET would be brought to Red Hat Enterprise Linux from the upstream .NET Core project, delivered to enterprise customers and developers both as an RPM and as a Linux container. That was quite a moment for the industry and, quite frankly, for me as well, having participated in the discussions that led to this significant announcement with Microsoft. Since then, we have been collaborating closely to make sure this day would actually arrive. Despite the usual challenges that come with a relatively new open source project, the project was
Continue reading “.NET Core on Red Hat Enterprise Linux”
If you’re heading to DockerCon 16 next week in Seattle, connect with us to see why Fortune 500 organizations trust Red Hat for enterprise deployments. Red Hat subject matter experts will be onsite to walk you through real-world use cases for securely developing, deploying and managing container-based applications.
Attend the State of Container Security Session
Join two of Red Hat’s Docker contributors as they discuss the state of container security today. Senior Software Engineer Mrunal Patel and Thomas Cameron, Global Evangelist of Emerging Technology, are presenting on how you can secure your containerized microservices without slowing down development.
Continue reading “Red Hat at DockerCon 16 in Seattle”
It’s been just over three years since Solomon Hykes presented the world with what is (so far) the most creative way to use the tar command: the Docker project. Not only does the project combine existing container technologies and make them easier to use, but its well-timed introduction drove an unprecedented rate of adoption for a new technology.
Did people run containers before the Docker project? Yes, but it was harder to do so. The broader community favored LXC, and Red Hat was working on a libvirt-based model for Red Hat Enterprise Linux. With OpenShift 2, Red Hat had already been running containers in production for several years, both in an online PaaS and on premise for enterprise customers. The pre-Docker model, however, was fundamentally different from what we are seeing today: rather than enabling completely independent runtimes inside the containers, the approach in
Continue reading “In Defense of the Pet Container, Part 1: Prelude – The Only Constant is Complexity”
Not long ago, Intel introduced a new Xeon processor platform to enable faster computing for the enterprise world. Codenamed Broadwell, this architecture brought additional cores to the chip and many improvements, from faster memory support to various security enhancements. As with three generations of Intel Xeon processors before this one, these benefits span beyond simple increases in transistor counts or the number of cores within each processor.
Today, Intel launched the Intel Xeon E7 v4 processor family, a high-end, enterprise-focused class of processors based on the Broadwell architecture and targeted at large systems with four or more CPUs. Accompanying the launch are several new world-record results on industry-standard benchmarks; this is where features like increased memory capacity and larger on-chip caches benefit overall system performance, resulting in the highest reported scores to date. A launch like that of the Xeon E7 v4, along with other announcements like it, typically sends a ripple of innovation throughout Red Hat’s partner ecosystem in the form of new and improved performance results. Supporting these partners is of paramount importance to Red Hat and, as a result, Red Hat Enterprise Linux is often the operating system selected for these ongoing benchmarking efforts.
Here is how Red Hat Enterprise Linux scored this time:
Continue reading “Red Hat Delivers High Performance on Critical Enterprise Workloads with the Latest Intel Xeon E7 v4 Processor Family”
In Architecting Containers Part 4: Workload Characteristics and Candidates for Containerization, we investigated the level of effort necessary to containerize different types of workloads. In this article I am going to address several challenges facing organizations that are deploying containers: how to patch container images, and how to determine which teams are responsible for them. Should they be controlled by development or by operations?
In addition, we are going to take a look at
Continue reading “Architecting Containers Part 5: Building a Secure and Manageable Container Software Supply Chain”
Many development and operations teams are looking for guidelines to help them determine which applications can be containerized and how difficult that may be. In Architecting Containers Part 3: How the User Space Affects Your Applications, we took an in-depth look at how the user space affects applications for both developers and operations. In this article we are going to take a look at workload characteristics and the level of effort required to containerize different types of applications.
The goal of this article is to provide guidance based on current capabilities and best practices within
Continue reading “Architecting Containers Part 4: Workload Characteristics and Candidates for Containerization”
Severity analysis of vulnerabilities by experts from the information security industry is rarely based on real code review. In the ‘Badlock’ case, most read our CVE descriptions and built up a score representing the risk each CVE poses to a user. There is nothing wrong with this approach if it is done correctly: CVEs are analyzed in isolation, as if no other issue exists. In the case of ‘Badlock’, however, there were eight CVEs, and the difference was that one of them was in a foundational component used by most of the code affected by the remaining seven. That very specific CVE was
Continue reading “How Badlock Was Discovered and Fixed”
In our previous posts, we’ve explored the Red Hat container ecosystem, the Red Hat Container Development Kit (CDK), OpenShift as a local deployment and OpenShift in production. In this final post of the series, we’re going to take a look at how a team can take advantage of the advanced features of OpenShift in order to automatically move new versions of applications from development to production — a process known as Continuous Delivery (or Continuous Deployment, depending on the level of automation).
Continue reading “Continuous Delivery / Deployment with OpenShift Enterprise”
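As a hedged sketch of the kind of promotion the series describes (the project and image names here are invented, not taken from the post), an OpenShift deployment can watch an image stream tag and redeploy automatically whenever a tested image is retagged into it, for example with `oc tag myapp:latest myapp:prod`:

```yaml
# Illustrative fragment: a DeploymentConfig that redeploys whenever the
# image stream tag "myapp:prod" is updated, i.e. whenever a tested build
# is promoted into that tag.
apiVersion: v1
kind: DeploymentConfig
metadata:
  name: myapp
spec:
  triggers:
  - type: ImageChange
    imageChangeParams:
      automatic: true
      containerNames:
      - myapp
      from:
        kind: ImageStreamTag
        name: myapp:prod
```

With a trigger like this, promotion to production reduces to a single retag operation, which is what makes the delivery pipeline easy to automate end to end.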