If you have ever wanted to learn about Red Hat’s involvement in the ARM server ecosystem, and are in the San Francisco Bay Area, this week may be a perfect opportunity. Red Hat will be exhibiting at ARM TechCon, ARM Holdings’ premier yearly show at the Santa Clara Convention Center. Attendees will be presented with a variety of great technical sessions and training topics, along with expert keynotes, solutions-based Expo Theater sessions, and an expo floor filled with new and emerging technologies for the datacenter. Note that the expo floor can be accessed with the free
Continue reading “Arm in Arm: Explore Enterprise Server Options at ARM’s Annual Technical Conference”
The usage of open source technologies has grown significantly in the public sector. In fact, according to a published memo, open source technologies allow the Department of Defense to “develop and update its software-based capabilities faster than ever, to anticipate new threats and respond to continuously changing requirements”. Cybersecurity threats are on the rise and organizations need to ensure that the software they use in their environments is safe. IT teams need the ability to quickly identify and mitigate breaches. They also need to deploy preventative measures and ensure that all stakeholders are protected.
Continue reading “Red Hat Virtualization and Security”
Linux containers, and their use in the enterprise, are evolving rapidly. If I didn’t know this already, what I’m seeing at conferences like ContainerCon would confirm it. We’ve moved on from “what are containers, anyway?” to “let’s hunker down and get it right.”
Recently, I attended and spoke at LinuxCon/ContainerCon Europe. Like LinuxCon/ContainerCon North America, many of the keynotes touched on Linux container work going on in the community. The European edition had a particularly strong focus on Linux container security and networking: at least six sessions were devoted to kernel security, orchestration security, and general container security, and four talks focused on container networking. Beyond security and networking, there were also many sessions about cloud-native and containerized applications.
Continue reading “Evolution of Containers: Lessons Learned at ContainerCon Europe”
Did you know there is an option to drop Linux capabilities in Docker? Using the docker run --cap-drop option, you can lock down root in a container so that it has limited access within the container. Sadly, almost no one ever tightens the security on a container or anywhere else.
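As a sketch of what that looks like in practice, here is a small Python helper that assembles such a docker run invocation. The image name ("fedora"), the command ("id"), and the default NET_BIND_SERVICE grant are illustrative assumptions, not taken from the post; nothing is executed here.

```python
def locked_down_cmd(image, *args, caps=("NET_BIND_SERVICE",)):
    """Build a `docker run` command line that drops every capability
    and adds back only the ones explicitly listed."""
    cmd = ["docker", "run", "--rm", "--cap-drop=ALL"]
    cmd += [f"--cap-add={c}" for c in caps]  # re-grant only what's needed
    cmd += [image, *args]
    return cmd

# Root inside such a container cannot chown, mount, load modules, etc.
print(" ".join(locked_down_cmd("fedora", "id")))
```

Starting from --cap-drop=ALL and adding capabilities back one by one is generally easier to audit than dropping them individually.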
The Day After is Too Late
There’s an unfortunate tendency in IT to think about security too late. People only buy a security system the day after they have been broken into.
Dropping capabilities can be low hanging fruit when it comes to improving container security.
What are Linux Capabilities?
According to the capabilities man page, capabilities are distinct units of privilege that can be independently enabled or disabled.
The way I describe it is that most people think of root as being all powerful. This isn’t the whole picture: the root user with all capabilities is all powerful. Capabilities were added to the kernel around 15 years ago to divide up the power of root.
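One way to see this division in action is to read a process’s effective capability mask out of /proc. This is a Linux-only sketch: the helper names are mine, and the bit numbers come from linux/capability.h.

```python
import os

CAP_NET_BIND_SERVICE = 10  # bit numbers from linux/capability.h
CAP_SYS_ADMIN = 21

def effective_caps(pid="self"):
    """Return the effective capability bitmask of a process,
    or None on systems without /proc (i.e. outside Linux)."""
    path = f"/proc/{pid}/status"
    if not os.path.exists(path):
        return None
    with open(path) as f:
        for line in f:
            if line.startswith("CapEff:"):
                return int(line.split()[1], 16)  # hex bitmask
    return None

def has_cap(mask, cap):
    """True if capability bit `cap` is set in `mask`."""
    return bool(mask >> cap & 1)

caps = effective_caps()
if caps is not None:
    print("CAP_SYS_ADMIN:", has_cap(caps, CAP_SYS_ADMIN))
```

An unprivileged process will typically show a mask of 0, while full root shows all bits set; a container started with --cap-drop shows something in between.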
Continue reading “Secure Your Containers with this One Weird Trick”
Linaro has announced a new project focused on IoT – LITE, or Linaro IoT and Embedded. This project will focus on developing core technology to be used in IoT devices and gateways.
Linaro is a consortium focused on the Linux ecosystem for ARM-based systems — see www.linaro.org for details. Much of their work to date has been focused on Android phones and tablets. Active development efforts include server and networking as well as Digital Home. The Digital Home project focuses on set-top boxes and home gateways. Linaro’s goal is to avoid fragmentation of the ARM ecosystem by providing a common foundation that can be used to build a wide range of value-added applications.
LITE extends existing Linaro projects by addressing both
Continue reading “ARMing IoT with Linaro LITE”
I attended the KVM Forum in August of this year, and as a new Red Hatter with a lot of VMware experience, it was eye opening. I am seeing a lot of interest in Red Hat Virtualization from my customers, so I wanted to understand the platform at a much deeper level. KVM is the technology that underpins the Red Hat Virtualization platform. A number of themes emerged for me as I attended sessions and enjoyed the hallway track. This was a forum for developers, by developers, so infrastructure types like myself were few and far between, but that did not impact my enjoyment of the conference. In fact, as a technical guy coming from a server background, I learned a lot more than I would have at a typical infrastructure-focused conference. Below, I will highlight some topics that stood out to me.
Continue reading “Notes from the Field – A Summary of the KVM Forum”
Frustrated by long delays getting new code into production? Worried that developers are adopting unapproved technologies? In an increasingly automated, containerized world it’s time to adapt your processes and policies so that developers can utilize the latest and most appropriate technology — and operations have full awareness of everything running in their environment.
The Problem and How to Solve It
IT processes, driven by business reliance on Mode 1 applications, were not designed for, and are not equipped to handle, rapid change. This creates friction between management and operations on one side and developers on the other: for example, when developer teams want to employ tools that differ from the standards accepted by operations. It doesn’t have to be this way, though.
In this post, we are going to take a deeper look at the collaboration that happens between development and operations when building and working with the “latest and greatest” technology stack.
Continue reading “Peace in Our Time: Bringing Ops and Dev Together”
The concept of saving (i.e. checkpointing / dumping) the state of a process at a certain point in time, so that it may later be used to restore / restart the process to the exact same state, has existed for many years. One of the most prominent motivations for developing and supporting checkpoint/restore functionality was improved fault tolerance: for example, checkpoint/restore allows processes to be restored from previously created checkpoints if, for one reason or another, those processes have been aborted.
Over the years there have been several different implementations of checkpoint/restore for Linux. Existing implementations differ in terms of the level (of the operating system) at which they operate; the lowest-level approaches implement checkpoint/restore directly in the kernel, while other, higher-level approaches implement it entirely in user space. While it would be difficult to unearth each and every approach / implementation, it is likely fair to
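As a concrete reference point for the user-space end of that spectrum, the CRIU tool splits the job into a dump step and a restore step. The sketch below only assembles CRIU command lines; the PID and paths are illustrative, and nothing is executed.

```python
def criu_dump_cmd(pid, images_dir):
    # Checkpoint: freeze the process tree rooted at `pid` and write
    # its state (memory, file descriptors, etc.) to `images_dir`.
    return ["criu", "dump", "--tree", str(pid),
            "--images-dir", images_dir, "--shell-job"]

def criu_restore_cmd(images_dir):
    # Restore: rebuild the process tree from the saved images,
    # resuming it in the state it was dumped in.
    return ["criu", "restore", "--images-dir", images_dir, "--shell-job"]

# e.g. criu_dump_cmd(1234, "/tmp/ckpt"), then later
#      criu_restore_cmd("/tmp/ckpt") — possibly on a different host.
```

Copying the images directory to another machine between the two steps is, in essence, what turns checkpoint/restore into container migration.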
Continue reading “From Checkpoint/Restore to Container Migration”
This post is the fifth installment in my PCI DSS series – a series dedicated to the use of Identity Management (IdM) and related technologies to address the Payment Card Industry Data Security Standard (PCI DSS). This specific post is related to requirement six (i.e. the requirement to develop and maintain secure systems and applications). The outline and mapping of individual articles to requirements can be found in the overarching post that started the series.
Section six of the PCI DSS standard covers guidelines related to secure application development and testing. IdM and its ecosystem can help in multiple ways to address requirements in this part of the PCI DSS standard. First of all, IdM includes a set of Apache modules for
Continue reading “PCI Series: Requirement 6 – Develop and Maintain Secure Systems and Applications”
Red Hat has been a technology industry leader for many years. We are not just creators of innovative open source technologies; we are also consumers of our own technologies. At Red Hat, nearly all of our core IT infrastructure runs on Red Hat Virtualization, from our development environment all the way to production. Several of our mission-critical applications are powered by Red Hat Virtualization, including our email systems, identity management, subscription manager, customer service portal, and many more. Since we are a global company, we have deployed thousands of VMs that need to be up and running 24/7, and we chose Red Hat Virtualization to get the job done.
Continue reading “Red Hat Keeps the Lights on with Red Hat Virtualization”