The recent release of Red Hat Cloud Suite marked a new milestone for Red Hat and our customers. First, it is the first in what will become a family of suites. Second, it enables enterprise IT to transform application development and operations into an agile innovation center based on hybrid cloud and DevOps technologies. Curating a broad set of open source technologies, Red Hat Cloud Suite offers a turnkey cloud solution with a container-based application development platform, private cloud infrastructure, and a common management framework. Specifically, Red Hat Cloud Suite includes
In Architecting Containers Part 4: Workload Characteristics and Candidates for Containerization we investigated the level of effort necessary to containerize different types of workloads. In this article I am going to address two challenges facing organizations that are deploying containers: how to patch container images, and how to determine which teams are responsible for them. Should they be controlled by development or operations?
This year’s SAPPHIRE NOW + ASUG Annual Conference in Orlando, Florida, from May 17-19, 2016, is packed with Red Hat events – happening at our booth, the SAP Mini Theater, and the SAP Demo Theater. We’re also presenting around the show at the Intel, Hitachi Data Systems, HP Enterprise, and Lenovo booths.
We look forward to meeting our community, showcasing our solutions, and highlighting the top companies we’ve been working with. We have excellent customers and partners and are eager to tell you their stories. Plus we’re giving away
One of my favorite things about technology is seeing what’s next. I often find myself asking, “…what’s on the horizon?” Or, better yet, “…what’s beyond the horizon?” In the case of Red Hat Enterprise Virtualization (RHEV), specifically the hypervisor, the “next generation node” is hovering in the distance. I anticipate this advance will be significant for both Red Hat partners and customers.
Lately, there has been an increase in IT organizations migrating their traditional virtualization workloads to open-source platforms such as Red Hat Enterprise Virtualization (RHEV). Although there are many reasons for migrating (e.g., cost, features), one key advantage stands out for the open source alternatives. Organizations are now seeing the viability of building on the same platform to integrate open source cloud solutions with traditional applications. No single platform is optimized for every workload type or tier. Not only do organizations get to take advantage of the fast innovation of open source, but they also realize significant cost savings.
In my Identity Management and Application Integration blog post, I talk about how applications can make the most of the identity ecosystem. For example, a number of applications have integrated Apache modules and SSSD to provide a more flexible authentication experience. Despite this progress, some remain unconvinced. They wonder why they should use Apache modules and SSSD in conjunction with, for example, Active Directory instead of using a simple LDAP configuration… essentially asking: why bother?
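To make the comparison concrete, here is a minimal sketch of what delegating web authentication to SSSD can look like in an httpd configuration. The directives come from the mod_authnz_pam module; the location path and the PAM service name `httpd-example` are hypothetical placeholders, not values from the post:

```apache
# Load the PAM authentication provider (mod_authnz_pam).
LoadModule authnz_pam_module modules/mod_authnz_pam.so

<Location "/app">
    AuthType Basic
    AuthName "Protected Application"
    # Delegate the authentication decision to PAM, which in turn talks
    # to SSSD -- so Kerberos tickets, Active Directory trust, caching,
    # and host-based access control are handled centrally rather than
    # in a hand-maintained LDAP stanza per application.
    AuthBasicProvider PAM
    AuthPAMService httpd-example
    Require valid-user
</Location>
```

The PAM service `httpd-example` would be a short file under `/etc/pam.d/` that calls `pam_sss.so`. With a plain `mod_authnz_ldap` setup, by contrast, each configuration typically repeats its own LDAP URL, bind credentials, and failover logic, with none of SSSD’s offline caching.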
Many development and operations teams are looking for guidelines to help them determine what applications can be containerized and how difficult it may be. In Architecting Containers Part 3: How the User Space Affects Your Applications we took an in depth look at how the user space affects applications for both developers and operations. In this article we are going to take a look at workload characteristics and the level of effort required to containerize different types of applications.
One of the most compelling features of Red Hat Enterprise Virtualization 3.6 is the ability to hot plug memory. Red Hat Enterprise Virtualization 3.5 provided the ability to hot plug vCPUs into running virtual machines. Red Hat Enterprise Virtualization 3.6 completes this vision of hot plugging resources on demand.
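Under the hood, RHEV hosts run KVM managed by libvirt, where memory hot plug is modeled as attaching a DIMM device to a running guest. As an illustrative sketch (the size and NUMA node are hypothetical, and the guest must already be defined with `<maxMemory>` headroom), the libvirt-level equivalent looks like:

```xml
<!-- dimm.xml: a 1 GiB memory module to hot plug into a running guest.
     Size and target NUMA node are illustrative values. -->
<memory model='dimm'>
  <target>
    <size unit='KiB'>1048576</size>
    <node>0</node>
  </target>
</memory>
```

Such a device can be attached to a live guest with `virsh attach-device <guest> dimm.xml --live`; in RHEV 3.6 the operation is driven through the management engine rather than by hand-editing XML.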
Severity analysis of vulnerabilities by experts from the information security industry is rarely based on real code review. In the ‘Badlock’ case, most read our CVE descriptions and built up a score representing the risk each CVE poses to a user. There is nothing wrong with this approach if it is done correctly: CVEs are analyzed in isolation, as if no other issue exists. In the case of ‘Badlock’ there were eight CVEs. The difference is that one of them was in a foundational component used by most of the code affected by the remaining seven CVEs. That very specific CVE was
In our previous posts, we’ve explored the Red Hat container ecosystem, the Red Hat Container Development Kit (CDK), OpenShift as a local deployment and OpenShift in production. In this final post of the series, we’re going to take a look at how a team can take advantage of the advanced features of OpenShift in order to automatically move new versions of applications from development to production — a process known as Continuous Delivery (or Continuous Deployment, depending on the level of automation).