Getting Started with the Red Hat Container Development Kit (CDK)

Docker containers are used to package software applications into portable, isolated units. Developing software with containers helps developers create applications that will run the same way on every platform. However, modern microservice deployments typically use a scheduler such as Kubernetes to run in production. In order to fully simulate the production environment, developers require a local version of production tools. In the Red Hat stack, this is supplied by the Red Hat Container Development Kit (CDK).

The Red Hat CDK is a customized virtual machine that makes it easy to run complex deployments resembling production. This means complex applications can be developed using production-grade tools from the very start.
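As a quick illustration (my own sketch, not from the original post), the point of a local, production-like cluster is that it can be driven with the same tooling used against production. The snippet below assumes the official kubernetes Python client and a kubeconfig that already points at the CDK's local cluster:

```python
# A minimal sketch: list the pods running on the local cluster, exactly as
# you would against production. Assumes the official `kubernetes` Python
# client (pip install kubernetes) and a kubeconfig pointing at the cluster.
from kubernetes import client, config

def list_local_pods():
    # Load credentials from the default kubeconfig (~/.kube/config).
    config.load_kube_config()
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces().items:
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")

if __name__ == "__main__":
    list_local_pods()
```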

Continue reading “Getting Started with the Red Hat Container Development Kit (CDK)”

Red Hat at RSA Conference 2016

Red Hat will once again have a booth at this year’s RSA Conference. This time, however, we will have a bigger presence and more staff – featuring a number of Red Hat security experts with a variety of backgrounds. We will be covering not only Identity Management (IdM) but the broader landscape of security-related topics. Whether you’re interested in talking about high-level security strategy, a vision for adopting IdM at your organization, or are simply seeking practical tips on how to solve specific problems related to risk assessment, governance, compliance, or

Continue reading “Red Hat at RSA Conference 2016”

Container Tidbits: Can Good Supply Chain Hygiene Mitigate Base Image Sizes?

With Docker moving all of their official images to Alpine, base image size is a hot topic. Sure, having sane and minimal base images is important, but software supply chain hygiene is equally important, if not more so. Interested in understanding why?

Among other things, it’s important in a production container environment to have provenance (i.e. knowledge of where your container images came from). Using

Continue reading “Container Tidbits: Can Good Supply Chain Hygiene Mitigate Base Image Sizes?”

Back to Blogging: New Identity Management Features in RHEL 7.2

Hello again! I have not had time to blog in a while. What happened? I picked up some additional responsibilities, and these consumed a lot of my time. But now… I am back and will be blogging once again.

Time goes on and there are (many) new topics that are worth sharing with you. The first subject that I want to cover is the new Identity Management (IdM) features in Red Hat Enterprise Linux 7.2. While the release happened nearly three months ago, it is still worth providing an overview of the new features and functionality. Another subject that people often ask about nowadays is the conversion from third-party vendor solutions to the IdM offering from Red Hat. We see a lot of interest in this area, and I want to share some hints for when it is a good idea to use what we offer and when it might not be. Finally, there are also some emerging technologies

Continue reading “Back to Blogging: New Identity Management Features in RHEL 7.2”

The Red Hat Ecosystem for Microservice and Container Development

Over the last couple of years, microservices and containers have started to redefine the software development landscape. The traditional large Java or C# application has been replaced with multiple smaller components (microservices) that coordinate to provide the required functionality. These microservices typically run inside containers, which provide isolation and portability.

This approach has numerous benefits including being able to scale and replace microservices independently as well as reducing the complexity of individual components. However, it also brings more complexity to the system level; it takes extra effort and tooling to manage and orchestrate the microservices and their interactions.

This post will describe how Red Hat technology and services can be used to develop, deploy and run an effective microservice-based system.
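As a tiny, self-contained illustration of the pattern (my own sketch, not from the post), a microservice is often just a small service exposing an HTTP endpoint that other components call over the network. The endpoint names and payload here are hypothetical:

```python
# A minimal sketch of a single microservice: one small HTTP service that
# other components in the system coordinate with over the network.
# Illustrative only; in practice this would be packaged into a container image.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class OrderStatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            # Health endpoint so an orchestrator can probe the service.
            self._send_json({"status": "ok"})
        elif self.path.startswith("/orders/"):
            order_id = self.path.rsplit("/", 1)[-1]
            # Hypothetical payload; a real service would query its own datastore.
            self._send_json({"order": order_id, "state": "shipped"})
        else:
            self.send_error(404)

    def _send_json(self, payload):
        body = json.dumps(payload).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Bind on all interfaces so the container's port can be published.
    HTTPServer(("0.0.0.0", 8080), OrderStatusHandler).serve_forever()
```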

Continue reading “The Red Hat Ecosystem for Microservice and Container Development”

Conversations from the Field: Building a Bridge to the Cloud

Cloud conversations are evolving at a seemingly ever-increasing pace. In my experience, nearly all “…what is the cloud?” type conversations have long since passed. In fact, for some organizations, private and public clouds are now central to daily business operations. For both the early and late majority, however, their (usually large) install base of traditional applications makes the cloud far from reality. These organizations tend to have significant investments in proprietary virtualization, management, and operations technologies, and it’s not a given that these applications are cloud-ready (today). While many proprietary technology vendors offer re-packaged versions of existing products to create a thin veil of “cloudiness” – this style of cloud enablement usually comes at a heavy price

Continue reading “Conversations from the Field: Building a Bridge to the Cloud”

24-FEB Webcast: Software Defined Networking and More with Red Hat Atomic Enterprise Platform

Containers are increasingly being used to create large platforms for distributed applications that can run in the cloud. One advantage of containers is that they can run large, distributed applications with low overhead by sharing a container-optimized operating system (such as Red Hat Enterprise Linux Atomic Host). In order to function properly, container-based infrastructure solutions make use of software-defined networking functionality to connect distributed applications across the cloud.

Networking’s role as a part of the stack is as critical for multi-container, multi-host applications as it is for other distributed applications. In this webcast, Red Hat principal software engineer Eric Paris provides you with a better understanding of the networking capabilities available with Red Hat Atomic Enterprise Platform Public Preview.

Continue reading “24-FEB Webcast: Software Defined Networking and More with Red Hat Atomic Enterprise Platform”

Real-Time with Less Downtime: Red Hat Enterprise Linux and SAP HANA

The information demands of today’s organizations are, in a sense, limitless. Access to more data, faster, and in more usable formats is the new standard—and for good reason. This big data holds insights that can let enterprises act more surely and quickly…and even create new opportunities. This model of instant access to information is often referred to as the real-time enterprise.

Just as importantly, IT systems and the information they provide have to be available. High availability (HA) has been an important metric for a long time and has been supported by numerous hardware and software solutions over the years. However, the definition has gotten tighter as the digital, always-on economy has become more pervasive. Hence, the term “five-nines” (i.e. 99.999 percent uptime, or roughly five minutes of downtime per year) has become commonplace.
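For concreteness (a back-of-the-envelope sketch of my own, not from the original post), it is easy to compute what each extra “nine” buys you:

```python
# Back-of-the-envelope: annual downtime budget at each availability level.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

for nines in range(2, 6):
    unavailability = 10 ** -nines  # e.g. five nines -> 0.00001
    downtime = MINUTES_PER_YEAR * unavailability
    print(f"{(1 - unavailability):.3%} uptime -> "
          f"{downtime:,.1f} minutes of downtime per year")
```

Five nines works out to about 5.3 minutes of downtime per year; each additional nine cuts the budget by a factor of ten.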

Real-time is good, it’s cool, and it enables the future. But uptime is a requirement. So one question that business and technical leaders need to answer before moving to a modern database platform is: “What are the trade-offs, if any, between real-time and uptime?”

Continue reading “Real-Time with Less Downtime: Red Hat Enterprise Linux and SAP HANA”

Container Tidbits: Does The Pets vs. Cattle Analogy Still Apply?

Background

So, most of us have heard the pets vs. cattle analogy. The saying goes that, in a cloud environment, you don’t waste time fixing individual virtual machines or containers; instead, you just delete them and re-provision. But does this apply to the entire cloud environment? The analogy holds that you don’t take cattle to the vet, you just send them to slaughter. But is this really true? Cattle are worth a lot of money. I have never really liked the pets vs. cattle analogy. I think it lacks sensitivity and may not be appropriate when talking to a CIO. The real problem, however, is that the analogy fails to fully convey the changes in IT that are happening.
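To make the “just delete them and re-provision” part of the saying concrete, here is a minimal sketch of my own (not from the post), assuming a Kubernetes cluster, the official kubernetes Python client, and hypothetical pod names:

```python
# Cattle-style remediation: rather than logging in to debug a misbehaving
# pod, delete it and let the controller schedule a fresh, identical copy.
# Sketch only; assumes the `kubernetes` Python client and a kubeconfig
# already pointing at a cluster. Pod name and namespace are hypothetical.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

v1.delete_namespaced_pod(name="web-1a2b3c", namespace="production")
# A replication controller (or ReplicaSet) notices the missing replica
# and re-provisions a new pod automatically.
```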

I propose that pets vs. cattle is not really about how or when we kill animals; instead, it’s about the simplicity of consuming animals and the complexity of maintaining the environment in which they live.

Pets

At the end of the day, in small quantities, pets are actually quite easy to take care of. When they are young, you take them to the vet for their shots. As they grow, you provide them with food, water, and a clean litter box (or take them outside once in a while) and they are pretty much “good to go”.

Like pets, traditional virtual machines get their “shots” when they are first created (via Puppet, Chef, Ansible, or through manual updates) and they are pretty much “good to go”. Of course, if they get “sick”, you take virtual machines to “the vet”: you log into them, troubleshoot, fix problems, or run update scripts – usually by hand, or driven by some automation, but always managed individually.

The problem is, raising pets in a house doesn’t scale. I don’t want 2000 cats and dogs at my house (and, let’s be honest, neither do you).

Cattle

Raising cattle is quite different from keeping a household pet. It’s actually quite a bit more complex. Cows, sheep, and chickens are raised on farms because it’s more efficient. Farms are set up to handle the scale. This requires large amounts of land, tractors, fences, silos for grain/feed, specialized trailers for your truck, specialized train cars, and specialized processing plants. In addition, farms have to keep shifting which fields are used for grazing so that they don’t become unusable over time. If you really think about it, I’m only just skimming the surface. Farms are more efficient, but quite a bit more expensive than a house to run day to day.

Clouds (e.g. OpenStack, OpenShift) are more akin to farms than houses. Firing up a cloud is like setting up a farm from scratch. It requires a lot of planning and execution. After firing up your cloud, there is constant technical care and maintenance: adding/removing storage, fixing hung instances, adding/removing VLANs, fixing pods stuck in a pending state, returning highly available services (Cinder, API nodes, OSE/Kube Master, Hawkular Metrics) back to production, upgrading the cloud platform, and so on. There is a lot of farm work with a cloud.

Farms are quite efficient at raising thousands of animals. I do not think, however, that you just tear down an entire farm when it is no longer running in an optimal state; instead, you fix it. Clouds are quite similar. Clouds are more work for operators, but less work for developers, just like farms are a lot of work for farmers, but much less work for shoppers at the store. Raising large numbers of chickens is harder for farmers and easier for consumers. The farmers hide the complexity from consumers.

Conclusion

I propose that it’s not really about pets vs. cattle, but about houses vs. farms. It’s far easier to buy chicken breast at the store than it is to raise hundreds of chickens in your backyard. I propose this as an improved analogy. Farms require quite a bit of work, are more sophisticated and more expensive than a house, but are quite efficient at supporting a lot more animals. At scale, I would take a farm any day over raising thousands of animals at my house. The same is true with a cloud environment. At scale, a cloud wins every time.

On a side note, people often conflate the notion of scale-up and scale-out with pets vs. cattle. In my mind, choosing between bigger or smaller bulls (scale up/down) and a greater number of smaller bulls (scale out) is a separate question, and a constant challenge with both pets and cattle….

Finally, for those who still don’t like pets vs. cattle or houses vs. farms, let’s try a beer analogy: bottles vs. home brew. While it’s easy to drop by the store and buy a bottle of beer… it’s way more fun to brew it. Let’s brew some beer together – leave a comment below!

Top 5 Skills Virtualization Admins Must Have to Stay Relevant in 2016

In the past few years, virtualization admins have been hailed as heroes for enabling their organizations to significantly slash costs while improving service levels to the lines of business. Since the IT industry is constantly evolving, how can virtualization admins position themselves for success and avoid being rubber ducks in 2016? Below, we will look at 5 skills that should be in your toolkit in order to remain relevant in your organization.

1. Develop a deep understanding of how DevOps fits into your organization

According to Gartner, “By 2016, DevOps will evolve from a niche strategy employed by large cloud providers to a mainstream strategy employed by 25 percent of Global 2000 organizations.” Like many people, you might be asking yourself – what exactly does DevOps mean? DevOps is

Continue reading “Top 5 Skills Virtualization Admins Must Have to Stay Relevant in 2016”