Bringing Intelligence to the Edge

In my last post, we discussed how an enterprise-grade Internet of Things (IoT) solution requires a more diligent approach than putting together a Proof of Concept (PoC). In this post, we’ll explore how businesses can leverage their existing infrastructure to create scalable IoT deployments.

While my previous post reviewed a “list of ingredients” needed to build out an industrial-grade IoT solution, the massive scale and reach of IoT solutions for businesses require some additional considerations, namely

Continue reading “Bringing Intelligence to the Edge”

IoT in Enterprise: Scaling from Proof of Concept to Deployment

The Internet of Things (IoT) is gaining steam as businesses across various industries launch projects that instrument, gather, and analyze data to extract value from various connected devices. While the general vision for IoT may be the same, each company is pursuing its own unique approach to it. The adoption of standards and the emergence of industry leaders will help tame the “wild west” situation we’re in, but it is still unknown how long it will take to get there. How should businesses implement their IoT solutions in a way that will allow them flexibility and control no matter what the eventual IoT landscape looks like?

It is relatively easy to put together an IoT solution using

Continue reading “IoT in Enterprise: Scaling from Proof of Concept to Deployment”

Red Hat Hyperconverged Solutions

Hyperconvergence is a key topic in IT planning across industries today. As customers look to lower costs and simplify day-to-day management of their IT operations, the hyperconverged model emerges as a fit for a number of operational use cases.

Convergence began at the hardware level, with compute, network, and storage appearing in consolidated platforms, but it’s now accelerating as hyperconvergence goes “software defined”. As a leading software infrastructure stack provider, Red Hat recognizes that reducing the number of moving parts in your infrastructure and simplifying the procurement and deployment processes are core requirements of the next-generation elastic datacenter.

Applying a solutions-aligned lens, Red Hat is innovating software defined compute-storage solutions across the portfolio, designed to meet the needs of a broad customer base with diverse requirements. As a vendor-partner in this journey, we recognize the value of bringing storage close to your compute and eliminating the need for a discrete storage tier. By doing so across traditional virtualization and cloud as well as containers, and by leveraging our industry-proven software defined storage assets – Red Hat Gluster Storage and Red Hat Ceph Storage – we’ve defined a robust set of efficient, solution-aligned hyperconverged offerings.

This blog provides a short overview of several areas where we see hyperconverged software defined architectures aligning with use cases, with a focus on

Continue reading “Red Hat Hyperconverged Solutions”

Self-Service Portals and Virtualization

There have been countless advances in technology in the last few years, both in general and at Red Hat. Listing just the ones specific to Red Hat could boggle the mind. Arguably, some of the biggest advances have come in the form of “soft” skills. Namely, Red Hat has become really good at listening – not only to our own customers but to our competitors’ customers as well. Nowhere is this more apparent than in our approach to applying a self-service catalog to virtualization – specifically, pairing Red Hat Enterprise Virtualization (RHEV) with CloudForms to streamline and automate virtual machine provisioning.

Continue reading “Self-Service Portals and Virtualization”

Container Image Signing

Red Hat engineers have been working to more securely distribute container images. In this post we look at where we’ve come from, where we need to go, and how we hope to get there.

History

When the Docker image specification was introduced, it did not have a cryptographic verification model. The most significant reason was the lack of a reliable checksum of image content: two otherwise identical images could have different checksum values. Without a consistent tarsum mechanism, cryptographic verification would be very challenging. With Docker version 1.10, checksums are more consistent and could be used as a stable reference for
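
To make the idea of a stable, content-addressable reference concrete, here is a minimal sketch of digest verification in Python. The file paths and helper are hypothetical, and the manifest layout is an assumption on my part (a schema-2/OCI-style manifest that records each layer as a sha256 digest) – this is an illustration, not Docker’s actual tooling:

    import hashlib
    import json

    def verify_layer(manifest_path, layer_path, index):
        """Check a local layer blob against the content-addressable digest
        recorded for it in an image manifest (assumed schema-2/OCI style)."""
        with open(manifest_path) as f:
            manifest = json.load(f)
        expected = manifest["layers"][index]["digest"]  # e.g. "sha256:ab12..."

        sha = hashlib.sha256()
        with open(layer_path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                sha.update(chunk)
        return "sha256:" + sha.hexdigest() == expected

Because the digest is derived from the bytes themselves, identical content always yields an identical reference – the consistency the original specification lacked.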

Continue reading “Container Image Signing”

Thinking Through an Identity Management Deployment

As the number of production deployments of Identity Management (IdM) grows, and as many more pilots and proofs of concept come into being, it becomes increasingly important to talk about best practices. Every production deployment needs to deal with things like failover, scalability, and performance. In turn, there are a few practical questions that need to be answered, namely:

  • How many replicas do I need?
  • How should these replicas be distributed between my datacenters?
  • How should these replicas be connected to each other?
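
Whatever the specific answers, a proposed topology can be sanity-checked mechanically. Below is a minimal sketch with hypothetical hostnames; the two-to-four-agreements-per-replica rule it enforces reflects common IdM sizing guidance, and the rest is plain graph checking:

    from collections import deque

    # Hypothetical topology: replica -> replicas it holds agreements with.
    topology = {
        "idm1.dc1.example.com": {"idm2.dc1.example.com", "idm1.dc2.example.com"},
        "idm2.dc1.example.com": {"idm1.dc1.example.com", "idm2.dc2.example.com"},
        "idm1.dc2.example.com": {"idm1.dc1.example.com", "idm2.dc2.example.com"},
        "idm2.dc2.example.com": {"idm2.dc1.example.com", "idm1.dc2.example.com"},
    }

    def validate(topology):
        for host, peers in topology.items():
            # Replication agreements are bidirectional.
            assert all(host in topology[p] for p in peers), host + " has a one-way link"
            # Common guidance: two to four agreements per replica.
            assert 2 <= len(peers) <= 4, host + " has " + str(len(peers)) + " agreements"
        # Every replica must be reachable from every other one.
        start = next(iter(topology))
        seen, queue = {start}, deque([start])
        while queue:
            for peer in topology[queue.popleft()] - seen:
                seen.add(peer)
                queue.append(peer)
        assert seen == set(topology), "topology is partitioned"

    validate(topology)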

The answer to these questions depends on

Continue reading “Thinking Through an Identity Management Deployment”

Combining PTP with NTP to Get the Best of Both Worlds

There are two supported protocols in Red Hat Enterprise Linux for synchronization of computer clocks over a network. The older and better-known protocol is the Network Time Protocol (NTP); its fourth version is defined by the IETF in RFC 5905. The newer protocol is the Precision Time Protocol (PTP), defined in the IEEE 1588-2008 standard.

The reference implementation of NTP is provided in the ntp package. Starting with Red Hat Enterprise Linux 7.0 (and now in Red Hat Enterprise Linux 6.8), a more versatile NTP implementation is also provided via the chrony package, which can usually synchronize the clock with better accuracy and has other advantages over the reference implementation. PTP is implemented in the linuxptp package.
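
As a quick way to see how well chronyd is steering the clock, the offset it reports can be read programmatically. This is a minimal sketch in Python that parses the output of chronyc tracking; it assumes the chrony package is installed and chronyd is running:

    import re
    import subprocess

    def chrony_offset_seconds():
        """Return chronyd's current estimate of the system clock offset,
        negative if the clock is ahead (fast) of true time."""
        out = subprocess.run(["chronyc", "tracking"],
                             capture_output=True, text=True, check=True).stdout
        # Expected line: "System time : 0.000001501 seconds slow of NTP time"
        m = re.search(r"System time\s*:\s*([\d.]+) seconds (fast|slow)", out)
        if m is None:
            raise RuntimeError("unexpected chronyc output")
        offset = float(m.group(1))
        return -offset if m.group(2) == "fast" else offset

    print("offset: %.9f s" % chrony_offset_seconds())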

With two different protocols designed for synchronization of clocks, there is an obvious question as to which one is

Continue reading “Combining PTP with NTP to Get the Best of Both Worlds”

Choosing a Platform Based on Workload Characteristics

Paradoxically, there has never been a better or more confusing time to discuss which platform is most appropriate for a given workload. As we seek to solve problems around automation, continuous integration / continuous delivery, ease of upgrades, operational complexity, uptime, compliance, and many other complex issues – it quickly becomes clear that there are more than a few viable options. Making matters worse – there is too much focus on the “how” (to adopt a given platform) and not enough focus on the “why”. To this end, I’d like to address more of the “why” in an attempt to better influence the “how”.

Continue reading “Choosing a Platform Based on Workload Characteristics”

I Really Can’t Rename My Hosts!

Hello again! In this post I will be sharing some ideas about what you can do to solve a complex identity management challenge.

As the adoption of Identity Management (IdM) grows – especially in heterogeneous environments where some systems run Linux and user accounts live in Active Directory (AD) – the question of renaming hosts becomes more and more relevant. Here is a set of requirements that we often hear from customers

Continue reading “I Really Can’t Rename My Hosts!”

In Defense of the Pet Container, Part 2: Wrappers, Aggregates and Models… Oh My!

In our first post defending the pet container, we looked at the challenge of complexity facing modern software stacks and at one way containers address that challenge: aggregation. In essence, the Docker “wrapper” consolidates the next level of the stack, much as RPM did at the component level, but aggregation is just the beginning of what the project provides.

If we take a step back and look at the Docker project in context, there are four aspects that contribute to its exceptional popularity:

  1. it simplifies the way users interact with the kernel, for features we have come to call Linux containers;
  2. it’s a tool and format for aggregate packaging of software stacks to be deployed into containers;
  3. it is a model for layering generations of changes on top of each other in a single-inheritance fashion (sketched below);
  4. it adds a transport for these aggregate packages.
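
The layering model in particular is easy to demonstrate. Here is a toy sketch in Python (my own illustration, not Docker code) that models each image layer as a mapping from path to content, with later layers shadowing earlier ones the way a union filesystem does:

    from collections import ChainMap

    # Each layer maps a file path to its content; a child layer shadows
    # its parents, so a lookup walks the single-inheritance chain.
    base = {"/etc/os-release": "rhel-7", "/usr/bin/python": "cpython-2.7"}
    app = {"/opt/app/server.py": "v1"}
    patch = {"/opt/app/server.py": "v2"}  # a new generation of the app

    image = ChainMap(patch, app, base)
    print(image["/opt/app/server.py"])   # "v2" -- the topmost layer wins
    print(image["/etc/os-release"])      # "rhel-7" -- inherited from base

Each generation records only what changed, which is what keeps layered images cheap to store and transport.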

Continue reading “In Defense of the Pet Container, Part 2: Wrappers, Aggregates and Models… Oh My!”