Segregating RHV Networks for the Slightly Paranoid

I recently had the pleasure of linking up with one of my favorite Red Hat colleagues (David “Pinky” Pinkerton) from Australia while we were both in Southeast Asia for a Red Hat event. We both have a propensity for KVM and Red Hat Virtualization (RHV) in particular, and he brought up a fantastic topic – truly segregated networks to support other security requirements. The reason came up because he had a “high security” client that needed to keep different traffic types separated within RHV, as the VMs were used to scan live malware. And that is why I made the comment about the (justifiably) paranoid.

Let’s take a look.

Continue reading “Segregating RHV Networks for the Slightly Paranoid”

Combining PTP with NTP to Get the Best of Both Worlds

There are two supported protocols in Red Hat Enterprise Linux for synchronization of computer clocks over a network. The older and better-known protocol is the Network Time Protocol (NTP); its fourth version is defined by the IETF in RFC 5905. The newer protocol is the Precision Time Protocol (PTP), which is defined in the IEEE 1588-2008 standard.

The reference implementation of NTP is provided in the ntp package. Starting with Red Hat Enterprise Linux 7.0 (and now in Red Hat Enterprise Linux 6.8) a more versatile NTP implementation is also provided via the chrony package, which can usually synchronize the clock with better accuracy and has other advantages over the reference implementation. PTP is implemented in the linuxptp package.
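One documented way to combine the two protocols is to let ptp4l (from linuxptp) discipline the NIC’s PTP hardware clock, and then point chronyd at that clock as a reference alongside ordinary NTP servers. The sketch below shows what such a chrony.conf fragment might look like; the device path and server name are placeholders and will differ on your system.

```ini
# /etc/chrony.conf fragment -- a sketch, not a complete configuration.
# Assumes ptp4l is already synchronizing the PTP hardware clock (PHC)
# exposed at /dev/ptp0; the actual device path depends on the NIC.
refclock PHC /dev/ptp0 poll 0 dpoll -2 offset 0

# Keep regular NTP servers as a sanity check and fallback source
# (ntp.example.com is a placeholder).
server ntp.example.com iburst
```

With a setup along these lines, chronyd can use the highly accurate PTP-disciplined hardware clock for normal operation while the NTP sources guard against a faulty or isolated PTP domain.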

With two different protocols designed for synchronization of clocks, there is an obvious question as to which one is

Continue reading “Combining PTP with NTP to Get the Best of Both Worlds”

Getting the Best of Both Worlds with Queue Splitting (Bifurcated Driver)

The Linux networking stack has many features that are essential for IoT (Internet of Things) and data center networking, such as filtering, connection tracking, memory management, VLANs, overlay, and process isolation. These features impose a small overhead in latency and throughput, which becomes noticeable when processing tiny packets at line rate.

DPDK (Data Plane Development Kit) allows access to the hardware directly from applications, bypassing the Linux networking stack. This reduces latency and allows more packets to be processed. However, many features that Linux provides are not available with DPDK.

What if there were a way to have ultra-low latency and high throughput for some traffic, together with the full feature set of Linux networking, all at the same time? This “utopia” is now possible with Queue Splitting (Bifurcated Driver).
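The general idea can be sketched with n-tuple flow steering via ethtool: selected flows are directed to dedicated hardware queues that a DPDK application polls directly, while everything else keeps flowing through the kernel stack. The interface name, UDP port, and queue number below are placeholders, and whether this works at all depends on the NIC and driver.

```shell
# Sketch only -- interface, port, and queue numbers are illustrative,
# and hardware/driver support varies.

# Enable n-tuple flow steering on the interface
ethtool -K eth0 ntuple on

# Steer UDP traffic for the latency-sensitive application (port 7000
# here) to a dedicated queue (queue 8), which the DPDK application
# then polls directly; all other traffic continues to be handled by
# the kernel networking stack on the remaining queues.
ethtool -N eth0 flow-type udp4 dst-port 7000 action 8
```

The appeal of this split is that the fast-path traffic bypasses the kernel entirely, yet management traffic, SSH, and monitoring on the same port keep all the usual Linux features.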

Continue reading “Getting the Best of Both Worlds with Queue Splitting (Bifurcated Driver)”

Pushing the Limits of Kernel Networking

Note: The following post was authored by Alexander Duyck before he left Red Hat earlier this month.  While Alex will be missed, his work continues in the capable hands of the Networking Services team.  To this end, I encourage you to “read on” and learn more about how we’ve turned up the heat on kernel networking with the beta release of Red Hat Enterprise Linux 7.2.


Over the last year I have been working at Red Hat as a part of the Linux Kernel Networking Services Team focused on improving the performance of the kernel networking data path.  Prior to working at Red Hat I had worked at Intel as a driver maintainer for their server drivers including ixgbe.  This has put me in a unique position to be able to provide tuning advice for both the network stack and the Intel device drivers.  Last month, at LinuxCon North America, I gave a presentation that summarizes most of the work that has been done to improve network performance in the last year, and the performance gains as seen by comparing Red Hat Enterprise Linux 7.1 versus an early (alpha) release of Red Hat Enterprise Linux 7.2.  The following is a recap of what I covered.

Continue reading “Pushing the Limits of Kernel Networking”

Precision Time Synchronization & Network Latency

In the second installment of “what’s new” for networking features in Red Hat Enterprise Linux 7, I’d like to shift the focus and spend some time talking about improvements to network time synchronization and enhancements in the network stack designed to reduce latency.  Note that the previous installment of this two part series was posted during the Red Hat Enterprise Linux 7 beta; now that Red Hat Enterprise Linux 7 is generally available I encourage customers to visit the Red Hat Customer Portal, download this latest release, and to let me know what you think.

Highly Accurate Time Synchronization

Precise time synchronization with microsecond or nanosecond accuracy is increasingly critical to the success of many high-speed, low-latency applications. Whether tracking time on trading transactions or market feeds, improving the time stamp accuracy on archived data, or administration-free time synchronization of servers within a rack (or around the world) for automation or regulatory purposes, precision time synchronization is a key enabler for delivering better performance.

New to Red Hat Enterprise Linux 7,

Continue reading “Precision Time Synchronization & Network Latency”

Red Hat Around the Web: Summit Q&A, NetworkManager 0.9.10, and DevOps (Part 3)

Were you able to attend the Red Hat Enterprise Linux roadmap session at this year’s Red Hat Summit?  If not, I have some good news – the slides are still available (here).  In addition, many of the questions that were asked after the presentation were recorded, sorted, and answered… and are now posted on the Red Hat Summit Blog.  Of note:

Continue reading “Red Hat Around the Web: Summit Q&A, NetworkManager 0.9.10, and DevOps (Part 3)”

If You Like Bonding, You Will Love Teaming

In this day and age, where almost everything is connected to the World Wide Web, the demands on networking (in general) are mushrooming. In the developed world it’s common to be able to get 20 megabits per second connections on our mobile devices and 50 megabits per second connections at home. By extension, the demands on enterprise data centers are even higher (by at least three to four orders of magnitude) – as these central “hubs” are where traffic from the aforementioned individual end nodes converge. Consider the act of flipping through a series of cloud-hosted HD photos on a mobile device – this can easily result in billions of packets being transferred (in fractions of a second).

The good news is that our networking interfaces are getting “bigger and faster.” 40 gigabit per second Ethernet is currently being deployed, and work to finalize 100 gigabit per second endpoint interfaces is currently underway.
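To make the comparison with bonding concrete, here is a sketch of what a teamd configuration for a simple active-backup team might look like. The interface names are placeholders, and this assumes the teamd package is installed; consult the teamd.conf documentation for the options your deployment needs.

```json
{
    "device": "team0",
    "runner": { "name": "activebackup" },
    "link_watch": { "name": "ethtool" },
    "ports": { "eth1": {}, "eth2": {} }
}
```

The JSON format is one of teaming’s practical advantages over bonding’s module parameters: runners (the rough equivalent of bonding modes) and per-port options are expressed in one declarative document that tools like NetworkManager can consume.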

Continue reading “If You Like Bonding, You Will Love Teaming”