Understanding Identity Management Client Enrollment Workflows

Enrolling a client system into Identity Management (IdM) can be done with a single command: `ipa-client-install`. This command configures SSSD, Kerberos, certmonger, and other elements of the system to work with IdM. The important result is that the system gets an identity and key so that it can securely connect to IdM and perform its operations. However, to get the identity and key, the system should…
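For context, a typical enrollment run looks something like the following minimal sketch. The domain, server, and principal values are placeholders for illustration; substitute your own IdM environment's details:

```
# Install the client packages (RHEL 7 package name).
yum install ipa-client

# Enroll the host; this configures SSSD, Kerberos, and certmonger,
# and retrieves the host's keytab. You will be prompted for the
# admin password. All values below are placeholders.
ipa-client-install --domain=example.com --server=idm.example.com \
    --principal=admin --mkhomedir
```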

Continue reading “Understanding Identity Management Client Enrollment Workflows”

We’re changing up our marketing approach. And it involves comic books.

We’re adopting a new marketing mantra for Red Hat Enterprise Linux: Listen. Learn. Build. Which probably doesn’t seem all that revolutionary. That’s pretty much the mantra of open source. But compare that to how tech marketing usually happens.

There’s a lot of building: assets and advertisements and the whole nine yards. But the listening and learning parts usually happen afterwards, if at all.

So we’re making a conscious effort to explicitly apply the principles of open source to the way that we market our flagship open source technology. We’re starting with the listening part.

And who exactly are we listening to? You.

And what exactly are we listening to you talk about? Your OS adventures.

And what exactly do we mean by “OS adventures”?

Actually, here’s a better idea: instead of telling you what we’re doing and why, let’s show you…

Continue reading “We’re changing up our marketing approach. And it involves comic books.”

Supercomputing & Red Hat: What’s Happening at ISC 2017?

Twice a year, the most prominent supercomputing sites in the world get to showcase their capabilities and compete for a Top500 spot. With Linux dominating the list, Red Hat is paying close attention to the latest changes that will be announced at the International Supercomputing (ISC) show in Frankfurt, Germany, from June 18 to June 22, 2017.

While supercomputers of the past were often proprietary, the trend of building them out of commodity components has dominated the landscape over the past two decades. But recently the definition of “commodity” in HPC has been morphing. Traditional solutions are routinely augmented by various acceleration technologies, cache-coherent interconnects are becoming mainstream, and boutique hardware and software technologies previously reserved for highly specialized solutions are being adopted by major HPC sites at scale.

Developing new highly scalable applications, and adapting existing ones, to take advantage of these technological advances across multiple deployment domains is the greatest challenge facing HPC sites. This is where the operating system can provide…

Continue reading “Supercomputing & Red Hat: What’s Happening at ISC 2017?”

Red Hat Virtualization 4.1 is LIVE!

Today marks another milestone in the evolution of our flagship virtualization platform, Red Hat Virtualization (RHV), as we announce the release of version 4.1. There are well over 165 new features, and while I don’t have the space to cover them all, I would like to highlight a few, especially in the area of integration. But first I’d like to put that integration into perspective.

Virtualization remains foundational and firmly rooted in the modern data center. Whether a particular application is better suited to “scale up,” or virtualization simply fits the business and technology model of a given data center, virtualization as an infrastructure platform is not going away anytime soon.

Continue reading “Red Hat Virtualization 4.1 is LIVE!”

Red Hat IT runs OpenShift Container Platform on Red Hat Virtualization and Ansible

Red Hat IT makes extensive use of our own product offerings to manage and scale our large IT infrastructure effectively. Red Hat Virtualization plays a key role in Red Hat’s overall IT infrastructure, as mentioned in a recent blog post by the head of our IT Platform Operations team, Anderson Silva: Red Hat Keeps the Lights on with Red Hat Virtualization.

Continue reading “Red Hat IT runs OpenShift Container Platform on Red Hat Virtualization and Ansible”

In Defense of the Pet Container, Part 3: Puppies, Kittens and… Containers

In our third and final installment (see: part one & part two), let’s take a look at some high-level use cases for Linux containers as well as finally (finally) defend what I like to call “pet” containers. From a general perspective, we see three recurring high-level use cases for containerizing applications:

  1. The fully orchestrated, multi-container application as you would create in OpenShift via the Red Hat Container Development Kit;
  2. Loosely orchestrated containers that don’t use advanced features like application templates and Kubernetes; and
  3. Pet containers (see the sketch below).
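To make that third case concrete, here is a rough sketch of the workflow a “pet” container implies: one long-lived container that you log into, patch, and hand-tend in place rather than delete and re-provision. The image and container names are illustrative:

```
# Run a single long-lived "pet" container (names are illustrative).
docker run -d --name mypet rhel7 sleep infinity

# Treat it like a pet: log in, troubleshoot, and patch it in place.
docker exec -it mypet bash

# Preserve your hand-tended state as a new image.
docker commit mypet mypet:patched
```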

Continue reading “In Defense of the Pet Container, Part 3: Puppies, Kittens and… Containers”

Red Hat at DockerCon 16 in Seattle

If you’re heading to DockerCon 16 next week in Seattle, connect with us to see why Fortune 500 organizations trust Red Hat for enterprise deployments. Red Hat subject matter experts will be onsite to walk you through real-world use cases for securely developing, deploying and managing container-based applications. 

Attend the State of Container Security Session

Join two of Red Hat’s Docker contributors as they discuss the state of container security today. Senior Software Engineer Mrunal Patel and Thomas Cameron, Global Evangelist of Emerging Technology, will present on how you can secure your containerized microservices without slowing down development.

Continue reading “Red Hat at DockerCon 16 in Seattle”

Container Tidbits: Does The Pets vs. Cattle Analogy Still Apply?

Background

So, most of us have heard the pets vs. cattle analogy. The saying goes that in a cloud environment, you don’t waste time fixing individual virtual machines or containers; instead, you just delete them and re-provision. But does this apply to the entire cloud environment? The analogy holds that you don’t take cattle to the vet, you just send them to slaughter. But is this really true? Cattle are worth a lot of money. I have never really liked the pets vs. cattle analogy. I think it lacks sensitivity and may not be appropriate when talking to a CIO. The real problem, however, is that the analogy fails to fully capture the changes happening in IT.

I propose that pets vs. cattle is not really about how or when we kill animals; instead, it’s about the simplicity of consuming animals and the complexity of maintaining the environment in which they live.

Pets

At the end of the day, pets in small quantities are actually quite easy to take care of. When they are young, you take them to the vet for their shots. As they grow, you provide them with food, water, and a clean litter box (or take them outside once in a while), and they are pretty much “good to go”.

Like pets, traditional virtual machines get their “shots” when they are first created (via Puppet, Chef, Ansible, or manual updates), and they are pretty much “good to go”. Of course, if they get “sick”, you take them to “the vet”: you log into them, troubleshoot problems, fix them, or run update scripts. Usually by hand, or driven by some automation, but managed individually.
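As an illustration, a “trip to the vet” for a single ailing VM might look something like this; the hostname and service name are placeholders:

```
# A sketch of nursing one "sick" VM back to health by hand
# (hostname and service name are placeholders).
ssh admin@vm01.example.com

systemctl status httpd      # diagnose the ailing service
yum update -y               # apply this machine's "shots"
systemctl restart httpd     # send it on its way
```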

The problem is, raising pets in a house doesn’t scale. I don’t want 2000 cats and dogs at my house (and, let’s be honest, neither do you).

Cattle

Raising cattle is quite different from raising a household pet. It’s actually quite a bit more complex. Cows, sheep, and chickens are raised on farms because it’s more efficient. Farms are set up to handle the scale. This requires large amounts of land, tractors, fences, silos for grain and feed, specialized trailers for your truck, specialized train cars, and specialized processing plants. In addition, farms have to keep shifting which fields are used for grazing so that they don’t become unusable over time. And if you really think about it, I’m only skimming the surface. Farms are more efficient, but quite a bit more expensive than a house to run day to day.

Clouds (e.g. OpenStack, OpenShift) are more akin to farms than houses. Firing up a cloud is like setting up a farm from scratch. It requires a lot of planning and execution. After firing up your cloud, there is constant technical care and maintenance: adding and removing storage, fixing hung instances, adding and removing VLANs, fixing pods stuck in a pending state, returning highly available services (Cinder, API nodes, OSE/Kube Master, Hawkular Metrics) to production, upgrading the cloud platform, and so on. There is a lot of farm work with a cloud.
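For instance, chasing down a pod stuck in a pending state might look something like this rough sketch, using the OpenShift client (the pod name is a placeholder):

```
# A sketch of everyday "farm work": triaging a Pending pod
# (the pod name is a placeholder).
oc get pods                      # spot pods stuck in Pending
oc describe pod my-app-1-deploy  # read the events for the scheduling failure
oc get nodes                     # check for NotReady or unschedulable nodes
```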

Farms are quite efficient at raising thousands of animals. I do not think, however, that you just tear down an entire farm when it is no longer running in an optimal state; instead, you fix it. Clouds are quite similar. Clouds are more work for operators, but less work for developers, just as farms are a lot of work for farmers but much less work for shoppers at the store. Raising large numbers of chickens is harder for farmers and easier for consumers. The farmers hide the complexity from consumers.

Conclusion

I propose that it’s not really about pets vs. cattle, but about houses vs. farms. It’s far easier to buy chicken breast at the store than it is to raise hundreds of chickens in your backyard. I propose this as an improved analogy. Farms require quite a bit of work and are more sophisticated and expensive than a house, but they are quite efficient at supporting a lot more animals. At scale, I would take a farm any day over raising thousands of animals at my house. The same is true of a cloud environment. At scale, a cloud wins every time.

On a side note, people often conflate the notions of scale up and scale out with pets vs. cattle. In my mind, bigger or smaller bulls (scale up/down) versus a greater number of smaller bulls (scale out) is an orthogonal question, and a constant challenge in terms of both pets and cattle…

Finally, for those who still don’t like pets vs. cattle or houses vs. farms, let’s try a beer analogy: bottles vs. home brew. While it’s easy to drop by the store and buy a bottle of beer, it’s way more fun to brew it. Let’s brew some beer together; leave a comment below!

Announcing “Yum + RPM for Containerized Applications” — Nulecule & Atomic App

The promise of Docker is that it simplifies application deployment, allows greater application density on hosts, and features a portable format that offers unparalleled flexibility over standard packaging. But one thing Docker doesn’t get you is the simplicity of `yum install foo` to install an application. Nor can Docker define or process a directed graph of container orchestration dependencies. We aim to change that.
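To make the goal concrete, the experience we are aiming for is a single-command deployment, roughly analogous to `yum install foo`. A sketch, assuming the Atomic App command-line tool and its `helloapache` example application:

```
# Today: installing a traditional package is one command.
yum install foo

# The goal: deploying a multi-container application should be just as
# simple. A sketch with Atomic App (the application name is illustrative).
atomicapp run projectatomic/helloapache
```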

Continue reading “Announcing “Yum + RPM for Containerized Applications” — Nulecule & Atomic App”