Why Now is the Perfect Time to Adopt Red Hat Enterprise Virtualization

IT decision makers seem to be up in arms over “next generation” technologies. In the past three months it has been nearly impossible to hold a conversation where the terms cloud, OpenStack, or (Linux) containers don’t surface. Hot topics and buzzwords aside, it has become clear (to me) that the right mix of market conditions is causing organizations to express a renewed interest in enterprise virtualization.

Many organizations are now ready to adopt the next generation of server hardware. The popular Sandy Bridge and Ivy Bridge processor generations from Intel are four to five years old, and organizations that purchased that hardware tend to refresh their equipment every four to five years. In addition, Intel’s Haswell generation is approaching its third anniversary. Organizations that lease hardware on a three-year cycle will also be looking at what the next generation of hardware has to offer.

What does a potential wave of hardware refreshes have to do with a renewed interest in enterprise virtualization? To no one’s surprise

Continue reading “Why Now is the Perfect Time to Adopt Red Hat Enterprise Virtualization”

Container Tidbits: When Should I Break My Application into Multiple Containers?

There is a lot of confusion around which pieces of your application you should break into multiple containers and why. I recently responded to this thread on the Docker user mailing list, which led me to write today’s post. In this post I plan to examine an imaginary Java application that historically ran on a single Tomcat server and to explain why I would break it apart into separate containers. In an attempt to make things interesting, I will also aim to

Continue reading “Container Tidbits: When Should I Break My Application into Multiple Containers?”

Test Driving OpenShift with the Red Hat Container Development Kit (CDK)

Setting up a local development environment that corresponds as closely as possible to production can be a time-consuming and error-prone task. However, for OpenShift deployments we have the Red Hat Container Development Kit (CDK), which does a good job of solving this and also provides a great environment for experimenting with containers and the Red Hat container ecosystem in general.

In this blog post we will cover deploying applications using the OpenShift Enterprise PaaS that comes with the CDK. The whole process will be driven via the OpenShift CLI, in contrast to our last post, which focused on OpenShift’s web interface. If you haven’t yet installed the CDK, check out the previous blog post for instructions.

By the end of this article you will know how to build existing applications on OpenShift, whether they already use

Continue reading “Test Driving OpenShift with the Red Hat Container Development Kit (CDK)”

Getting Started with the Red Hat Container Development Kit (CDK)

Docker containers are used to package software applications into portable, isolated units. Developing software with containers helps developers create applications that will run the same way on every platform. However, modern microservice deployments typically use a scheduler such as Kubernetes to run in production. In order to fully simulate the production environment, developers require a local version of the production tools. In the Red Hat stack, this is supplied by the Red Hat Container Development Kit (CDK).

The Red Hat CDK is a customized virtual machine that makes it easy to run complex deployments resembling production. This means complex applications can be developed using production-grade tools from the very start

Continue reading “Getting Started with the Red Hat Container Development Kit (CDK)”

The Red Hat Ecosystem for Microservice and Container Development

Over the last couple of years, microservices and containers have started to redefine the software development landscape. The traditional large Java or C# application has been replaced with multiple smaller components (microservices) that coordinate to provide the required functionality. These microservices typically run inside containers, which provide isolation and portability.

This approach has numerous benefits, including the ability to scale and replace microservices independently and reduced complexity in individual components. However, it also brings more complexity at the system level; it takes extra effort and tooling to manage and orchestrate the microservices and their interactions.

This post will describe how Red Hat technology and services can be used to develop, deploy and run an effective microservice-based system.

Continue reading “The Red Hat Ecosystem for Microservice and Container Development”

Container Tidbits: Does The Pets vs. Cattle Analogy Still Apply?

Background

So, most of us have heard the pets vs. cattle analogy. The saying goes that, in a cloud environment, you don’t waste time fixing individual virtual machines or containers; instead, you just delete them and re-provision. But does this apply to the entire cloud environment? The analogy says that you don’t take cattle to the vet, you just send them to slaughter. But is this really true? Cattle are worth a lot of money. I have never really liked the pets vs. cattle analogy. I think it lacks sensitivity and may not be appropriate when talking to a CIO. The real problem, however, is that the analogy fails to fully capture the changes that are happening in IT.

I propose that pets vs. cattle is not really about how or when we kill animals; instead, it’s about the simplicity of consuming animals and the complexity of maintaining the environment in which they live.

Pets

At the end of the day, pets in small quantities are actually quite easy to take care of. When they are young, you take them to the vet for their shots. As they grow, you provide them with food, water, and a clean litter box (or take them outside once in a while) and they are pretty much “good to go”.

Like pets, you give traditional virtual machines their “shots” when they are first created (via Puppet, Chef, Ansible, or through manual updates) and they are pretty much “good to go”. Of course, if they get “sick”, you take your virtual machines to “the vet”: you log into them, troubleshoot, and fix problems, or run update scripts. This is usually done by hand, or driven by some automation, but each machine is managed individually.
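
To make those “shots” concrete, here is a minimal sketch, in Python, of what caring for a handful of pet VMs individually might look like. The host names and the update command are hypothetical examples, and in practice this role is usually filled by tools like Puppet, Chef, or Ansible, as noted above.

```python
#!/usr/bin/env python3
"""A hedged sketch of "pet" care: patching a few individually named
virtual machines, one at a time, over SSH. Host names and the update
command are hypothetical examples."""

import subprocess

# Each VM has a name and is looked after individually, like a pet.
PETS = ["web01.example.com", "db01.example.com", "batch01.example.com"]


def give_shots(host: str) -> None:
    """Log into a single VM and apply its updates (its "shots")."""
    result = subprocess.run(
        ["ssh", host, "sudo yum -y update"],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # The pet is "sick": a human logs in and troubleshoots by hand.
        print(f"{host} needs a trip to the vet: {result.stderr.strip()}")
    else:
        print(f"{host} is good to go")


if __name__ == "__main__":
    for pet in PETS:
        give_shots(pet)
```

The point is not the specific tooling but the pattern: every machine is touched by name, one at a time.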

The problem is, raising pets in a house doesn’t scale. I don’t want 2000 cats and dogs at my house (and, let’s be honest, neither do you).

Cattle

Raising cattle is quite different from raising a household pet. It’s actually quite a bit more complex. Cows, sheep, and chickens are raised on farms because it’s more efficient. Farms are set up to handle the scale. This requires large amounts of land, tractors, fences, silos for grain and feed, specialized trailers for your truck, specialized train cars, and specialized processing plants. In addition, farms have to keep shifting which fields are used for grazing so that the land doesn’t become unusable over time. If you really think about it, I’m only just skimming the surface. Farms are more efficient, but quite a bit more expensive than a house to run day to day.

Clouds (e.g. OpenStack, OpenShift) are more akin to farms than houses. Firing up a cloud is like setting up a farm from scratch: it requires a lot of planning and execution. After firing up your cloud, there is constant technical care and maintenance, e.g. adding and removing storage, fixing hung instances, adding and removing VLANs, fixing pods stuck in a pending state, returning highly available services (Cinder, API nodes, the OSE/Kube master, Hawkular Metrics) to production, upgrading the cloud platform, and so on. There is a lot of farm work with a cloud.
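
As one small, concrete example of that farm work, here is a hedged sketch, using the kubernetes Python client (an assumption on my part, not something prescribed above), of how an operator might list pods stuck in the Pending state before investigating them.

```python
"""A hedged sketch of one piece of cloud "farm work": finding pods
stuck in Pending so an operator can investigate. Assumes the
`kubernetes` Python client and a kubeconfig with cluster access."""

from kubernetes import client, config


def list_pending_pods() -> None:
    config.load_kube_config()  # use the operator's local kubeconfig
    v1 = client.CoreV1Api()
    pending = v1.list_pod_for_all_namespaces(
        field_selector="status.phase=Pending"
    )
    for pod in pending.items:
        # The scheduler records why a pod cannot be placed (for example,
        # no node with enough CPU or memory) in the pod's conditions.
        conditions = pod.status.conditions or []
        reasons = "; ".join(
            f"{c.reason}: {c.message}" for c in conditions if c.message
        )
        print(
            f"{pod.metadata.namespace}/{pod.metadata.name} -> "
            f"{reasons or 'no reason reported yet'}"
        )


if __name__ == "__main__":
    list_pending_pods()
```

Fixing whatever such a check turns up is still operator work, which is exactly the point: the farm doesn’t get torn down, it gets tended.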

Farms are quite efficient at raising thousands of animals. I do not think, however, that you just tear down an entire farm when it is no longer running in an optimal state; instead, you fix it. Clouds are quite similar. Clouds are more work for operators, but less work for developers, just like farms are a lot of work for farmers, but much less work for shoppers at the store. Raising large numbers of chickens is harder for farmers and easier for consumers. The farmers hide the complexity from consumers.

Conclusion

I propose that it’s not really about pets vs. cattle, but about houses vs. farms. It’s far easier to buy chicken breast at the store than it is to raise hundreds of chickens in your backyard. I propose this as an improved analogy. Farms require quite a bit of work and are more sophisticated and expensive than a house, but they are quite efficient at supporting far more animals. At scale, I would take a farm any day over raising thousands of animals at my house. The same is true with a cloud environment. At scale, a cloud wins every time.

On a side note, people often conflate the notion of scale up and scale out with pets vs. cattle. In my mind, bigger or smaller bulls (scale up/down) versus a greater number of smaller bulls (scale out) is a separate question, and a constant challenge with both pets and cattle…

Finally, for those who still don’t like pets vs. cattle or houses vs. farms, let’s try a beer analogy: bottles vs. home brew. While it’s easy to drop by the store and buy a bottle of beer, it’s way more fun to brew it yourself. Let’s brew some beer together; leave a comment below!

10-FEB Webcast: Wicked Fast Container-Based Apps and Performance Tuning with Atomic Enterprise Platform

In a commissioned study conducted by Forrester Consulting on behalf of Red Hat, 44% of IT professionals identified performance as one of their top three concerns in adopting container technologies. Benchmarks indicate that containers deliver equal or better performance than virtual machines in almost all cases, with the runtime cost of containers described as “negligible”.

What are the abstraction costs and what do you need to consider when running container-based applications on Atomic Enterprise Platform Public Preview?

Continue reading “10-FEB Webcast: Wicked Fast Container-Based Apps and Performance Tuning with Atomic Enterprise Platform”

27-JAN Webcast: Using the Atomic Registry for Secure Container Image Management

When working with container-based applications, admins and developers need a place to store and share container images, a way to deploy them, and a way to monitor and administer them once they’re deployed. Join Red Hat software engineers Aaron Weitekamp and Stef Walter for this webcast, Using the Atomic Registry for Secure Container Image Management, on January 27th at 11:00 ET, to gain a better understanding of sharing, deploying, and managing container images.

Continue reading “27-JAN Webcast: Using the Atomic Registry for Secure Container Image Management”

Schrodinger’s Container: How Red Hat is Building a Better Linux Container Scanner

The rapid rise of Linux containers as an enterprise-ready technology in 2015, thanks in no small part to the tooling provided by the Docker project, should come as no surprise: Linux containers offer a broad array of benefits to the enterprise, from greater application portability and scalability to the ability to fully leverage composite applications.

But these benefits aside, Linux containers can, if IT security procedures are not followed, also cause serious harm to mission-critical operations. As Red Hat’s Lars Herrmann has pointed out, containers aren’t exactly transparent when it comes to seeing and understanding all of their internal code. This means that tools and technologies to actually see inside a container are critical to enterprises that want to deploy Linux containers in mission-critical scenarios.

Continue reading “Schrodinger’s Container: How Red Hat is Building a Better Linux Container Scanner”

Looking Back on Containers in 2015

Whoa. 2015 went by really quickly. I do suppose it’s not all that surprising, as time flies… especially when you’re having fun or… getting older (you pick). In fact, we’ve already put 2 percent of 2016 behind us! That said, before we get too deep into “the future”, and in consideration of Janus having not one but two faces, let’s take a quick trip down memory lane…

Without a doubt, 2015 was an exciting year for all things “container”, especially here at Red Hat.

To recap, the year started off with a bang when we announced the general availability of Red Hat Enterprise Linux Atomic Host alongside Red Hat Enterprise Linux 7.1.  Then – less than two months later

Continue reading “Looking Back on Containers in 2015”