In the world of heterogeneous data centers, having multiple operating systems running on different hardware platforms and architectures is the norm. Even traditional applications and databases are being migrated or abstracted using Java and other interpreted languages to minimize the impact on end users should they decide to run on a different platform.
Consider the common scenario where you have both Windows and Linux running in the data center and you need your Linux application to talk to Microsoft SQL Server and get some existing data from it. Your application would need to connect to the Windows server that is running the SQL Server database using one of many available APIs and request information.
While that may sound trivial, in reality you need to know where that system is located, authenticate your application against it, and pay the penalty of traversing one or more networks to get the data back – all while the user is waiting. This, in fact, was “the way of the world” before Microsoft announced its intent to port Microsoft SQL Server to Linux in March of 2016. Today, however, you have a choice of having your applications connect to a Microsoft SQL Server that runs on either Windows or Linux.
Continue reading “Microsoft, Red Hat, and HPE Collaboration Delivers Choice & Value to Enterprise Customers”
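As a concrete sketch of the scenario above: with Microsoft’s command-line tools for Linux installed, a Linux client can query SQL Server directly, whichever platform the database runs on. The server name, database, and credentials below are illustrative.

```shell
# Query a SQL Server instance (running on Windows or Linux) from a Linux
# client; requires the mssql-tools package. All names here are examples.
sqlcmd -S sqlserver01.example.com,1433 \
       -d inventory \
       -U appuser -P 'S3cret!' \
       -Q "SELECT TOP 5 name FROM sys.tables;"
```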
In my previous article I wrote about how it was possible to move from checkpoint/restore to container migration with CRIU. This time I want to write about how to actually migrate a running container from one system to another. In this article I will migrate a runC based container using runC’s built-in CRIU support to checkpoint and restore a container on different hosts.
I have two virtual machines (rhel01 and rhel02) which are hosting my container. My container is running Red Hat Enterprise Linux 7 and is located on a shared NFS, which both of my virtual machines have mounted. In addition, I am telling runC to mount the container
Continue reading “Container Live Migration Using runC and CRIU”
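The flow described above can be sketched with runC’s built-in CRIU support; the container name, image path, and bundle location are illustrative, with the checkpoint image and bundle sitting on the shared NFS mount visible to both hosts.

```shell
# On rhel01: checkpoint the running container; the image data is written
# to the shared NFS mount that both virtual machines can see
runc checkpoint --image-path /mnt/nfs/checkpoint mycontainer

# On rhel02: restore the container from that same image; the bundle
# (config.json plus rootfs) also lives on the shared NFS mount
runc restore --image-path /mnt/nfs/checkpoint \
     --bundle /mnt/nfs/mycontainer mycontainer
```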
As the number of production deployments of Identity Management (IdM) grows, and as many more pilots and proofs of concept come into being, it becomes increasingly important to talk about best practices. Every production deployment needs to deal with failover, scalability, and performance. In turn, there are a few practical questions that need to be answered, namely:
- How many replicas do I need?
- How should these replicas be distributed between my datacenters?
- How should these replicas be connected to each other?
The answer to these questions depends on
Continue reading “Thinking Through an Identity Management Deployment”
There are two supported protocols in Red Hat Enterprise Linux for synchronizing computer clocks over a network. The older and better-known of the two is the Network Time Protocol (NTP); in its fourth version, NTP is defined by the IETF in RFC 5905. The newer protocol is the Precision Time Protocol (PTP), defined in the IEEE 1588-2008 standard.
The reference implementation of NTP is provided in the ntp package. Starting with Red Hat Enterprise Linux 7.0 (and now in Red Hat Enterprise Linux 6.8) a more versatile NTP implementation is also provided via the chrony package, which can usually synchronize the clock with better accuracy and has other advantages over the reference implementation. PTP is implemented in the linuxptp package.
With two different protocols designed for synchronization of clocks, there is an obvious question as to which one is
Continue reading “Combining PTP with NTP to Get the Best of Both Worlds”
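One way to combine the two protocols is to let ptp4l (from linuxptp) discipline the NIC’s PTP hardware clock and have chronyd use that clock as a reference alongside ordinary NTP sources. A minimal /etc/chrony.conf sketch, where the device path and server names are examples:

```
# PTP hardware clock on the NIC, disciplined by ptp4l, used as a reference
refclock PHC /dev/ptp0 poll 0

# Ordinary NTP sources for cross-checking and fallback
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
```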
Not long ago, Intel introduced a new Xeon processor platform to enable faster computing for the enterprise world. Codenamed Broadwell, this architecture brought additional cores to the chip and many improvements, from faster memory support to various security enhancements. As with three generations of Intel Xeon processors before this one, these benefits span beyond simple increases in transistor counts or the number of cores within each processor.
Today, Intel launched the Intel Xeon E7 v4 processor family, a high-end, enterprise-focused class of processors based on the Broadwell architecture and targeted at large systems with four or more CPUs. Accompanying the launch are several new world records on industry-standard benchmarks; this is where things like increased memory capacity or larger on-chip caches benefit overall system performance, resulting in the highest reported scores to date. The Xeon E7 v4 launch, like other announcements of its kind, typically sends a ripple of innovation throughout Red Hat’s partner ecosystem in the form of new and improved performance results. Supporting these partners is of paramount importance to Red Hat and, as a result, Red Hat Enterprise Linux is often selected for these ongoing benchmarking efforts.
Here is how Red Hat Enterprise Linux scored this time:
Continue reading “Red Hat Delivers High Performance on Critical Enterprise Workloads with the Latest Intel Xeon E7 v4 Processor Family”
Virtualization technologies have evolved such that support for multiple networks on a single host is a must-have feature. For example, Red Hat Enterprise Virtualization allows administrators to configure multiple NICs using bonding for several networks to allow high throughput or high availability. In this configuration, different networks can be used for connecting virtual machines (using layer 2 Linux bridges) or for other uses such as host storage access (iSCSI, NFS), migration, display (SPICE, VNC), or for virtual machine management. While it is possible to consolidate all of these networks into a single network, separating them into multiple networks enables simplified management, improved security, and an easier way to track errors and/or downtime.
The aforementioned configuration works well but leaves us with a network bottleneck at the host level. All networks compete for the same queue on the NIC (or on the bond), and Linux enforces only a trivial quality-of-service queuing discipline, pfifo_fast, which enqueues packets into parallel bands based on their Type of Service bits or assigned priority. One can easily imagine a case where a single network hogs the outgoing link (e.g. during a migration storm, where many virtual machines are migrated off the host simultaneously, or when there is an attacker VM). The consequences of such cases can include lost connectivity to the management engine or loss of storage access for the host.
A simple solution is to configure
Continue reading “Steps to Optimize Network Quality of Service in Your Data Center”
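One such configuration can be sketched with the tc utility: replace pfifo_fast with an HTB qdisc that guarantees each network a share of the link while capping how much the migration network can consume. The interface name, class IDs, rates, and port range below are all illustrative.

```shell
# Root HTB qdisc on the bond; unclassified traffic falls into class 1:10
tc qdisc add dev bond0 root handle 1: htb default 10

# Parent class carrying the link's full rate
tc class add dev bond0 parent 1: classid 1:1 htb rate 10gbit

# General traffic: guaranteed 8gbit, may borrow up to the full link
tc class add dev bond0 parent 1:1 classid 1:10 htb rate 8gbit ceil 10gbit

# Migration traffic: guaranteed 2gbit, capped so it cannot starve the rest
tc class add dev bond0 parent 1:1 classid 1:20 htb rate 2gbit ceil 4gbit

# Steer migration traffic (an example destination port range) into 1:20
tc filter add dev bond0 parent 1: protocol ip u32 \
   match ip dport 49152 0xffc0 flowid 1:20
```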
Some time ago, two different projects were started in the open source community, namely: Ipsilon and Keycloak. These projects were started by groups with different backgrounds and different perspectives. In the beginning, it seemed like these two projects would have very little in common… though both aimed to include
Continue reading “Red Hat Federation Story: Ipsilon & Keycloak… a “Clash of the Titans””
Woah. 2015 went by really quickly. I do suppose it’s not all that surprising as time flies… especially when you’re having fun or… getting older (you pick). In fact, we’ve already put 2 percent of 2016 behind us! That said, before we get too deep into “the future”, and in consideration of Janus having not one but two faces, let’s take a quick trip down memory lane…
Without a doubt, 2015 was an exciting year for all things “container”, especially here at Red Hat.
To recap, the year started off with a bang when we announced the general availability of Red Hat Enterprise Linux Atomic Host alongside Red Hat Enterprise Linux 7.1. Then – less than two months later
Continue reading “Looking Back on Containers in 2015”
Over the past few decades we have seen great advancements in the IT industry. In fact, the industry itself seems to be growing at an ever faster pace. However, as the industry grows, so too does its evil twin – the figurative sum of all threats to IT security.
On the bright side, along with a steady stream of ever-evolving security issues and threats, there has also been a great effort to mitigate and, when possible, entirely eliminate such threats. This is accomplished either by fixing the bugs that allowed these issues and threats to exist in the first place or by fixing the configurations and protective mechanisms of systems so as to deny attackers any success.
As 2015 was no stranger to news stories about data leaks, security flaws, and new types of malware, one could easily conclude that “the dark side” is winning this seemingly eternal race.
However, taking the complexity of today’s IT solutions into account
Continue reading “Configuring and Applying SCAP Policies During Installation”
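For unattended installs, the OSCAP Anaconda add-on lets a kickstart file select and apply an SCAP policy at installation time. A minimal kickstart snippet, where the chosen profile is only an example:

```
# Kickstart fragment: apply an SCAP profile during installation via the
# OSCAP Anaconda add-on (the profile choice here is illustrative)
%addon org_fedora_oscap
    content-type = scap-security-guide
    profile = pci-dss
%end
```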
We are pleased to announce the release of Red Hat Certificate System 9. Supported on Red Hat Enterprise Linux 7.1 and based on the open source PKI capabilities of the Dogtag Certificate System, Red Hat Certificate System 9 provides a robust and flexible set of features to support Certificate Life Cycle Management. It is able to issue, renew, suspend, revoke, archive/recover, and manage the single and dual-key X.509v3 certificates needed to handle strong authentication, single sign-on, and secure communications. Red Hat Certificate System 9 incorporates several new and enhanced features, including
Continue reading “Red Hat Certificate System 9 Now Available”
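On the client side of such a PKI, the certificate lifecycle typically begins with a key pair and a certificate signing request that a CA such as Certificate System then signs. A minimal openssl sketch, where the subject name is illustrative:

```shell
# Generate a 2048-bit RSA private key
openssl genrsa -out server.key 2048

# Create an X.509 certificate signing request (CSR) for the CA to sign
openssl req -new -key server.key \
        -subj "/CN=www.example.com/O=Example Corp" \
        -out server.csr

# Sanity-check the request's signature before submitting it
openssl req -in server.csr -noout -verify
```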