Most people don’t consider the average USB memory stick to be a security threat. In fact, in a social engineering experiment conducted in 2016 at the University of Illinois and detailed in this research paper, a group of researchers dropped 297 USB sticks in parking lots, hallways, and classrooms. Of the 297 USB sticks dropped,
Continue reading “Built-in protection against USB security attacks with USBGuard”
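For the curious, here is a minimal sketch of the kind of lockdown USBGuard enables: generate a policy that whitelists the devices currently attached, then block everything else. This is an illustration only; it assumes the usbguard package and service are installed, and the sample rule (vendor/product ID, device name) is made up.

```shell
# A hand-written USBGuard rule looks like this (illustrative IDs):
RULE='allow id 046d:c31c serial "" name "USB Keyboard"'

if command -v usbguard >/dev/null 2>&1; then
    # Whitelist every device attached right now, then enforce the policy;
    # anything plugged in later is blocked until explicitly allowed.
    usbguard generate-policy > /etc/usbguard/rules.conf
    systemctl enable --now usbguard
else
    echo "usbguard not installed; example rule: $RULE"
fi
```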
Watch out, San Francisco, and get ready to make your datacenter more secure with Red Hat!
Love (for IT security) will definitely be in the air this Valentine’s week at RSA, where Red Hat will be presenting not only breakout sessions, but also a Birds-of-a-Feather and Peer2Peer Session. To learn more about Red Hat’s sessions at RSA, have a look at the details below.
Continue reading “Red Hat talks security at the 2017 RSA Conference in San Francisco”
A few weeks ago, I wrote a blog on removing capabilities from a container. But what if you want to add capabilities?
While I recommend that people remove capabilities, in certain situations users need to add capabilities in order to get their container to run.
One example is when you have an app that needs a single capability, like a Network Time Protocol (NTP) daemon container that resets the system time on a machine. So if you wanted to run a container for an NTP daemon, you would need to pass
--cap-add SYS_TIME. Sadly, many users don’t think this through, or understand what it means to add a capability.
Continue reading “Container Tidbits: Adding Capabilities to a Container”
Did you know there is an option to drop Linux capabilities in Docker? Using the
docker run --cap-drop option, you can lock down root in a container so that it has limited access within the container. Sadly, almost no one ever tightens the security on a container or anywhere else.
The Day After is Too Late
There’s an unfortunate tendency in IT to think about security too late. People only buy a security system the day after they have been broken into.
Dropping capabilities can be low-hanging fruit when it comes to improving container security.
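As a sketch of what that low-hanging fruit looks like in practice: the commands below compare a container's effective capability mask with and without dropping two capabilities that are in Docker's default set but that most workloads never use. The choice of the `fedora` image and of `NET_RAW`/`MKNOD` as the capabilities to drop is illustrative, and the commands only run if a Docker client is present.

```shell
DROP="--cap-drop NET_RAW --cap-drop MKNOD"

if command -v docker >/dev/null 2>&1; then
    # Read the effective capability bitmask straight from /proc, first
    # with the default set, then with the reduced one.
    docker run --rm fedora grep CapEff /proc/self/status
    docker run --rm $DROP fedora grep CapEff /proc/self/status
else
    echo "docker not available; drop flags shown: $DROP"
fi
```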
What are Linux Capabilities?
According to the capabilities man page,
capabilities are distinct units of privilege that can be independently enabled or disabled.
The way I describe it is that most people think of root as being all-powerful. This isn’t the whole picture: the
root user with all capabilities is all-powerful. Capabilities were added to the kernel roughly 15 years ago to divide up the power of root.
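You can see this division of power on any Linux box: every process carries capability bitmasks, and the effective set is exposed in `/proc/<pid>/status`. A full-capability root shell reports a mask with all the capability bits set, while an unprivileged shell reports all zeros.

```shell
# Read the effective capability bitmask of the current shell. Each bit
# corresponds to one capability (CAP_SYS_TIME, CAP_NET_RAW, ...).
CAPEFF=$(grep CapEff /proc/self/status | awk '{print $2}')
echo "Effective capabilities bitmask: $CAPEFF"
```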
Continue reading “Secure Your Containers with this One Weird Trick”
We often compare the security of containers to virtual machines and ask ourselves “…which is more secure?” I have argued for a while now that comparing containers to virtual machines is really a false premise – we should instead be comparing containers to
Continue reading “Container Tidbits: The Tenancy Scale”
The Internet of Things (IoT) is gaining steam as businesses across various industries launch projects that instrument, gather, and analyze data to extract value from connected devices. While the general vision for IoT may be the same, each company is pursuing its own unique approach. The adoption of standards and the emergence of industry leaders will help tame the “wild west” situation we’re in, but it is still unknown how long it will take to get there. How should businesses implement their IoT solutions in a way that allows them flexibility and control no matter what the eventual IoT landscape looks like?
It is relatively easy to put together an IoT solution using
Continue reading “IoT in Enterprise: Scaling from Proof of Concept to Deployment”
Red Hat engineers have been working to more securely distribute container images. In this post we look at where we’ve come from, where we need to go, and how we hope to get there.
When the Docker image specification was introduced, it did not have a cryptographic verification model. The most significant reason was the lack of a reliable checksum hash of image content: two otherwise identical images could have different checksum values. Without a consistent tarsum mechanism, cryptographic verification would be very challenging. With Docker version 1.10, checksums are more consistent and can be used as a stable reference for
Continue reading “Container Image Signing”
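The idea behind those stable checksums can be illustrated in a few lines: a content-addressable digest is just a sha256 over the bytes of the content, so identical bytes always yield the identical reference, and that reference is what a signature can be attached to. The toy "layer" file below stands in for real image content.

```shell
# Write some stand-in layer content to a temp file and derive its
# content-addressable reference, in the same "sha256:<hex>" form that
# image digests use.
LAYER=$(mktemp)
printf 'layer contents' > "$LAYER"
DIGEST="sha256:$(sha256sum "$LAYER" | awk '{print $1}')"
echo "$DIGEST"
rm -f "$LAYER"
```

Because the digest is a pure function of the bytes, anyone who downloads the content can recompute it and compare; a signature over the digest therefore covers the content itself.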
It’s been a busy few weeks for us on the Atomic Host team, and we’re excited to announce the release of Red Hat Enterprise Linux Atomic Host 7.2.5! This is a big one, too. For those not familiar with our release cadence, we release a new version of Atomic Host every six weeks. This enables us to balance the reliability of Red Hat Enterprise Linux with exciting new features and capabilities from our Project Atomic upstream community in a production-ready, supportable manner.
Now, let’s walk through some key new features in Atomic Host:
Continue reading “What’s New in Red Hat Enterprise Linux Atomic Host 7.2.5”
Severity analysis of vulnerabilities by experts from the information security industry is rarely based on real code review. In the ‘Badlock’ case, most read our CVE descriptions and built up a score representing the risk each CVE poses to a user. There is nothing wrong with this approach if it is done correctly. CVEs are analyzed in isolation, as if no other issue exists. In the case of ‘Badlock’, there were eight CVEs. The difference is that one of them was in a foundational component used by most of the code affected by the remaining seven CVEs. That very specific CVE was
Continue reading “How Badlock Was Discovered and Fixed”
Virtualization technologies have evolved such that support for multiple networks on a single host is a must-have feature. For example, Red Hat Enterprise Virtualization allows administrators to configure multiple NICs using bonding for several networks to allow high throughput or high availability. In this configuration, different networks can be used for connecting virtual machines (using layer 2 Linux bridges) or for other uses such as host storage access (iSCSI, NFS), migration, display (SPICE, VNC), or for virtual machine management. While it is possible to consolidate all of these networks into a single network, separating them into multiple networks enables simplified management, improved security, and an easier way to track errors and/or downtime.
The aforementioned configuration works great but leaves us with a network bottleneck at the host level. All networks compete for the same queue on the NIC (or on the bond), and Linux enforces only a trivial quality-of-service queuing algorithm, pfifo_fast, which maintains several bands side by side and enqueues packets into them based on their Type of Service bits or assigned priority. One can easily imagine a case where a single network hogs the outgoing link (e.g. during a migration storm, where many virtual machines are being migrated off the host simultaneously, or when there is an attacker VM). The consequences can include lost connectivity to the management engine or lost storage access for the host.
A simple solution is to configure
Continue reading “Steps to Optimize Network Quality of Service in Your Data Center”
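To make the bottleneck concrete: one common way to stop a single network from hogging the link is to replace the default pfifo_fast qdisc with HTB classes that guarantee each traffic class a share of the bandwidth while letting it borrow up to a ceiling. The device name, class layout, and rates below are illustrative assumptions, not the configuration the article recommends; the commands only run if `tc` and the device are present (and require root).

```shell
DEV=eth0

if command -v tc >/dev/null 2>&1 && tc qdisc show dev "$DEV" >/dev/null 2>&1; then
    # Root HTB qdisc; unclassified traffic falls into class 1:30.
    tc qdisc add dev "$DEV" root handle 1: htb default 30
    tc class add dev "$DEV" parent 1:  classid 1:1  htb rate 10gbit
    # Guaranteed rates per traffic class, each able to borrow up to line rate.
    tc class add dev "$DEV" parent 1:1 classid 1:10 htb rate 4gbit ceil 10gbit  # storage
    tc class add dev "$DEV" parent 1:1 classid 1:20 htb rate 4gbit ceil 10gbit  # migration
    tc class add dev "$DEV" parent 1:1 classid 1:30 htb rate 2gbit ceil 10gbit  # everything else
else
    echo "tc or device $DEV unavailable; commands shown for illustration"
fi
```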