Frustrated by long delays getting new code into production? Worried that developers are adopting unapproved technologies? In an increasingly automated, containerized world, it’s time to adapt your processes and policies so that developers can use the latest and most appropriate technology — and operations has full awareness of everything running in its environment.
The Problem and How to Solve It
IT processes, driven by business reliance on Mode 1 applications, were not designed for — nor are they equipped to handle — rapid change. This creates friction between management and operations on one side and developers on the other. For example, when development teams want to employ tools other than the standards accepted by operations, conflict often results. It doesn’t have to be this way, though.
In this post, we are going to take a deeper look at the collaboration that happens between development and operations when building and working with the “latest and greatest” technology stack.
Continue reading “Peace in Our Time: Bringing Ops and Dev Together”
The concept of saving (i.e. checkpointing / dumping) the state of a process at a certain point in time, so that it may later be used to restore / restart the process (to the exact same state), has existed for many years. One of the most prominent motivations to develop and support checkpoint/restore functionality was to provide improved fault tolerance. For example, checkpoint/restore allows processes to be restored from previously created checkpoints if, for one reason or another, these processes had been aborted.
Over the years there have been several different implementations of checkpoint/restore for Linux. Existing implementations differ in terms of the level (of the operating system) at which they operate; the lowest-level approaches implement checkpoint/restore directly in the kernel, while other “higher-level” approaches implement it entirely in user space. While it would be difficult to unearth each and every approach / implementation, it is likely fair to
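The core idea can be illustrated at the application level with a toy sketch: periodically persist enough state to resume later, then restore from the most recent checkpoint after a failure. The Python below is a hypothetical illustration of that pattern only — real kernel- or user-space implementations capture full process state (memory, file descriptors, and so on), not a hand-picked dictionary.

```python
import os
import pickle
import tempfile

CHECKPOINT = os.path.join(tempfile.gettempdir(), "demo.ckpt")

def checkpoint(state):
    """Dump the current state so a later run can resume from it."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, CHECKPOINT)  # atomic rename: never leaves a half-written checkpoint

def restore():
    """Return the last saved state, or a fresh one if no checkpoint exists."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            return pickle.load(f)
    return {"i": 0, "total": 0}

def run(limit, crash_at=None):
    """Sum the integers below `limit`, checkpointing after every step.

    If `crash_at` is given, simulate an abort when the counter reaches it;
    a subsequent call resumes from the last checkpoint instead of restarting.
    """
    state = restore()
    while state["i"] < limit:
        if crash_at is not None and state["i"] == crash_at:
            raise RuntimeError("simulated crash")
        state["total"] += state["i"]
        state["i"] += 1
        checkpoint(state)
    return state["total"]
```

For example, a run that “crashes” partway through can be rerun and will pick up from the saved counter rather than from zero — the same fault-tolerance motivation described above, in miniature.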
Continue reading “From Checkpoint/Restore to Container Migration”
This post is the fifth installment in my PCI DSS series – a series dedicated to the use of Identity Management (IdM) and related technologies to address the Payment Card Industry Data Security Standard (PCI DSS). This specific post is related to requirement six (i.e. the requirement to develop and maintain secure systems and applications). The outline and mapping of individual articles to requirements can be found in the overarching post that started the series.
Section six of the PCI DSS standard covers guidelines related to secure application development and testing. IdM and its ecosystem can help in multiple ways to address requirements in this part of the PCI DSS standard. First of all, IdM includes a set of Apache modules for
Continue reading “PCI Series: Requirement 6 – Develop and Maintain Secure Systems and Applications”
Red Hat has been a technology industry leader for many years. We are not just creators of innovative open source technologies; we are also consumers of our own technologies. At Red Hat, nearly all of our core IT infrastructure — from our development environment all the way to production — runs on Red Hat Virtualization. Several of our mission-critical applications are powered by Red Hat Virtualization, including our email systems, identity management, subscription manager, customer service portal, and many more. Since we are a global company, we have deployed thousands of VMs that need to be up and running 24/7, and we chose Red Hat Virtualization to get the job done.
Continue reading “Red Hat Keeps the Lights on with Red Hat Virtualization”
Do you have questions about how Red Hat JBoss Middleware fits into your SAP landscape? Or how to configure your High Availability environment? How to get the most out of your Red Hat Enterprise Linux for SAP Applications or Red Hat Enterprise Linux for SAP HANA subscriptions? Stop by Booth 611 at SAP TechEd 2016.
We’ll also be hosting
Continue reading “Meet and Greet Red Hat Experts at SAP TechEd”
As applications are designed, redesigned, or even simply thought about at a high level, we frequently weigh technical barriers alongside business needs. Business needs may dictate that a new architecture move forward, but technical limitations can sometimes counter how far forward — unless there is something to bridge the gap. The new Neutron network integration between Red Hat Virtualization (RHV) and Red Hat OpenStack Platform (RHOSP) provides such a bridge for business and technical solutions.
Continue reading “Integrating Red Hat Virtualization and Red Hat OpenStack Platform with Neutron Networking”
This past August, Red Hat announced the availability of Red Hat Virtualization 4.0, the latest virtualization release that aims to help IT organizations modernize their infrastructure, enhance their virtualization management and automation, and deploy advanced networking functionality. As a Software Engineer, I know that releases are exciting and early adopter customers eagerly await the opportunity to deploy the latest features. However, the upgrade process has not always been seamless. Through my work with the Customer Support Team, we have been exploring tools to streamline and simplify the upgrade process.
Continue reading “Upgrade Your Red Hat Virtualization Environment with a Simple Tool”
What if I told you that you can have your Red Hat Enterprise Linux (RHEL) based Cloud infrastructure, with Red Hat Virtualization, OpenStack, OpenShift and CloudForms all setup before you have to stop for lunch?
Would you be surprised?
Could you do that today?
In most cases, I am betting your answer would be “not possible — not even on your best day.” Not to worry: the solution is here, and it’s called the QuickStart Cloud Installer (QCI).
Welcome to another post dedicated to the use of Identity Management (IdM) and related technologies in addressing the Payment Card Industry Data Security Standard (PCI DSS). This specific post is related to requirement three (i.e. the requirement to protect stored cardholder data). In case you’re new to the series – the outline and mapping of individual articles to the requirements can be found in the overarching post that started the series.
Section three of the PCI DSS standard talks about storing cardholder data in a secure way. One of the technologies that can be used for secure storage of cardholder data is