In my last post I reviewed some of my observations from the RSA Security Conference. As mentioned, I enjoyed the opportunity to speak with conference attendees about Red Hat’s Identity Management (IdM) offerings. That said, whether I’m out and about staffing an event or “back home” answering e-mails, one of the most frequently asked questions I receive goes something like this: “…I’m roughly familiar with both direct and indirect integration options… and I’ve read some of the respective ‘pros’ and ‘cons’… but I’m still not sure which approach to use… what should I do?” If you’ve ever asked a similar question, I have some good news: today’s post will help you determine which option aligns best with your current (and future) needs.
Continue reading “Direct, or Indirect, that is the Question…”
As many specialists in the security world know, the RSA Security Conference is one of the biggest security conferences in North America. This year it was once again held at the Moscone Center in San Francisco. The conference grows every year, drawing more people and companies from all over the world.
If you attended, you may have noticed that Red Hat had a booth this year. Located in a corner of the main expo floor, not far from some of the “big guys” like IBM, Microsoft, EMC, CA Technologies, and Oracle, we were in a great location and received no shortage of traffic. In fact, despite staffing the booth with six Red Hatters, we didn’t have any “down time”; everyone seemed to be interested in what Red Hat has to offer in security.
Over the course of the conference I made a few interesting observations…
Continue reading “RSA Security Conference 2015 in Review: Three Observations”
In a recent blog post on the appc spec, I mentioned Project Atomic’s evolving Nulecule [pronounced: noo-le-kyul] spec as an attempt to move beyond the current limitations of the container model. Let’s dig a bit deeper into that.
Continue reading “The Atomic App Concept… ‘It All Starts When a Nulecule Comes Out of its Nest’”
It Started with Developers
Developers were the first adopters of containers for application creation. Now that containers have made their way into production environments, operations teams are starting to look more closely at the benefits they bring. Deployments are a key focus, not just because the container model is so different, but also because it offers automation integration points that were previously unavailable.
Release engineers are faced with a tough question: continue to do rolling-style updates as they always have, or move to a red/black deployment model. Both have their pros and cons, but using containers with red/black deployment methods provides
Continue reading “Stop Gambling with Upgrades, Murphy’s Law Always Wins”
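The red/black model mentioned in that excerpt can be sketched in a few lines. The following is a minimal, hypothetical illustration only, not Red Hat’s implementation: the `Router` class, the environment names, and the `deploy` helper are all made up for the example. The core idea is that the new version is started and verified while the old one still serves all traffic, and the cutover is a single atomic switch.

```python
# Minimal sketch of a red/black (a.k.a. blue-green) deployment switch.
# The Router class and environment names are illustrative, not a real API.

class Router:
    """Routes all traffic to exactly one live environment at a time."""
    def __init__(self, live):
        self.live = live

    def switch_to(self, env):
        """Atomically flip traffic to `env`; return the previous environment."""
        previous, self.live = self.live, env
        return previous  # kept running, so rollback is instant


def deploy(router, new_env, health_check):
    """Start and verify the new environment, then flip traffic in one step."""
    if not health_check(new_env):
        # Traffic never touched the broken release; the old one keeps serving.
        raise RuntimeError("new environment failed health check; traffic untouched")
    return router.switch_to(new_env)


router = Router(live="red")
old = deploy(router, "black", health_check=lambda env: True)
print(router.live)  # -> black
print(old)          # -> red (still running; switch back to it to roll back)
```

Contrast this with a rolling update, where old and new instances serve traffic simultaneously during the transition; with red/black, rollback is just another `switch_to` call rather than a second rolling pass.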
With every new Intel Xeon processor generation, the benefits typically span beyond simple increases in transistor counts or the number of cores within each processor. Things like increased memory capacity per chip or larger on-chip caches are tangible and measurable, and often have a direct effect on performance, resulting in record-breaking scores on various standard benchmarks.
There is, however, more to every new processor family launch than meets the eye. These new chips often send a ripple of innovation throughout our ecosystem of partners, forcing them to re-evaluate and revisit existing performance results and break the status quo. The ability to support these partners is of paramount importance to Red Hat and, as a result, Red Hat Enterprise Linux is often selected by our partners to support their ongoing benchmarking efforts.
Yesterday, Intel launched the Intel Xeon E7 v3 processor family with several new world records on industry-standard benchmarks. Red Hat Enterprise Linux was used in nearly one-third of all results. The following table captures these leading results
Continue reading “Red Hat Delivers Leading Application Performance with the Latest Intel Xeon Processors”
At this week’s CoreOS Fest in San Francisco, CoreOS is, unsurprisingly, pushing hard on the Application Container Spec (appc) and its first implementation, rkt, making it the topic of the first session after the keynote and telling a bold story about broad adoption.
When making technology decisions, Red Hat continuously evaluates all available options with the goal of selecting the best technologies that are supported by upstream communities. This is why Red Hat is engaging upstream in appc to actively contribute to the specification.
Red Hat engages in many upstream communities. However, this engagement should not imply full support, or that we consider appc or rkt ready for
Continue reading “rkt, appc, and Docker: A Take on the Linux Container Upstream”
Linux containers have been getting a lot of hype recently, and it’s easy to understand why. Delivering applications to meet the demands of the business is challenging and containers are disrupting traditional application development and deployment models, enabling businesses to explore new, better ways to deliver products and services.
Innovations like the Docker image format and Kubernetes give you a simpler way to quickly create, package, assemble, and distribute applications. But with hype come misunderstandings and misconceptions.
Join Red Hat and Cisco tomorrow, May 5, 2015 at 11:00 AM ET / 8:00 AM PT for the webcast, Top 6 Misconceptions about Linux Containers, to gain clarity around these misconceptions. In the webcast, you will:
- Gain a pragmatic look at Linux containers.
- Understand what benefits containers can deliver for you.
- Discover what security, implementation, and other considerations you should understand before your organization embraces this technology.
If you haven’t already done so, register today.