Immutable Infrastructure – Myth or Must?

By Bart Driscoll, Global Innovation Lead – Digital, Dell Technologies Consulting | August 16, 2016

I am as guilty as the next person when it comes to slinging techno-jargon. In just the last week, I swear I have used, or heard used, the term “immutable infrastructure” in the context of DevOps, IaaS, and PaaS at least five times.  So what does it mean?  Why should an enterprise care?  And what are some of the challenges of moving to an immutable infrastructure?

Well, before tackling all these questions, let’s agree on a definition of immutable infrastructure. The phrase was first coined by Chad Fowler in his 2013 blog, “Trash Your Servers and Burn Your Code: Immutable Infrastructure and Disposable Components”. The model borrows from the programming concept of immutability, which states that once an object is created, its state must remain unchanged.  In other words, the only way to make a change to an immutable piece of infrastructure (server, container, component, etc.) is to replace the old version with a new, updated version.
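The programming concept translates directly. As a minimal sketch (the `ServerImage` type here is purely illustrative), Python’s frozen dataclasses show the replace-don’t-mutate pattern that immutable infrastructure borrows:

```python
from dataclasses import dataclass, replace

# Illustrative only: a frozen dataclass cannot be mutated in place,
# just as an immutable server is never patched in place.
@dataclass(frozen=True)
class ServerImage:
    name: str
    os_version: str

v1 = ServerImage(name="web", os_version="ubuntu-20.04")

# v1.os_version = "ubuntu-22.04"   # would raise FrozenInstanceError

# The only way to "change" v1 is to build a new, updated version:
v2 = replace(v1, os_version="ubuntu-22.04")
```

The old object (`v1`) is never modified; a change always yields a fresh replacement (`v2`) that supersedes it, exactly as a new server image replaces the old one.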

On the surface, the paradigm of immutability sounds like a dream. Production systems are no longer contaminated by ad hoc, out-of-cycle patches, bug fixes, and updates.  Access and credentialing can be simplified, since locked systems don’t need developers and testers with root access.  And the configuration management database (CMDB) can finally become a source of truth, enabling teams to easily replicate and recover production, because change doesn’t “just happen” — especially when no one is watching.  The benefits of immutability in infrastructure are clear.  So why don’t more enterprises employ this best practice?

Well, it is because adopting this practice requires two critical, circularly dependent practices to be in place:

Enterprises must stop associating value with preserving artisanal infrastructures.

Effort spent creating and maintaining fleets of unique, un-reproducible servers in a data center is at odds with an enterprise’s goal of digital transformation. There is ample evidence that the traditional practice of supporting and maintaining long-lived (i.e., mutable) servers and components increases operational complexity and risk, and results in slower, lower-quality deployments.
To transition to an immutable infrastructure operating model, enterprises must create a system that enables all changes (infrastructure and application) to be created, tested, and packaged outside of PRODUCTION. Even simple changes, like patching an OS or deploying a bug fix to an application, must move through a delivery pipeline before being introduced into PRODUCTION.

This delivery pipeline packages the outputs, or artifacts, needed to deploy the change from scratch. It then verifies and validates that the change can run successfully in the data center before promoting it. As a change nears this promotion point, a new instance representing the updated server or component is created in PRODUCTION. Traffic is then routed to the new instance, and the old version is deprecated and ultimately recycled. This is, in essence, a blue/green deployment.
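The cutover step above can be sketched in a few lines. This is a hedged, toy model — the `Router`, `deploy`, and `health_check` names are my own, standing in for whatever load balancer and validation gates a real pipeline would use:

```python
# Toy model of the blue/green cutover: a router points at one live
# environment; the new version is validated first, then traffic is
# switched and the old version is handed back for recycling.
class Router:
    def __init__(self, live: str):
        self.live = live              # environment currently serving traffic

    def cutover(self, new_env: str) -> str:
        old = self.live
        self.live = new_env           # route traffic to the new instance
        return old                    # old version is deprecated/recycled

def deploy(router: Router, new_env: str, health_check) -> str:
    # Verify and validate the change BEFORE promotion, never after.
    if not health_check(new_env):
        raise RuntimeError(f"{new_env} failed validation; {router.live} stays live")
    return router.cutover(new_env)

router = Router(live="blue-v1")
retired = deploy(router, "green-v2", health_check=lambda env: True)
# router.live is now "green-v2"; "blue-v1" is retired and can be recycled
```

The key property is that the failure path never touches the live environment: if validation fails, traffic never moves, which is what makes the pattern low-risk compared with in-place patching.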

To make this leap to immutable infrastructure and change the value system, CIOs need a trusted automation platform managed by policy-driven workflows to replace the care-and-feeding activities of tens or hundreds of expert system administrators and release engineers. Without an automated, end-to-end delivery pipeline for infrastructure, CIOs and their teams will not be able to make the transition. This leads to the second practice: continuous delivery for infrastructure.

Enterprises require fully automated, end-to-end pipelines to manage the creation, promotion, and deployment of runtime environments.

Without automated pipelines, the cost to introduce a change into PRODUCTION tends to exceed both the expected return on investment and the personal commitment often required to successfully deploy that change. As a result, updates and revisions to infrastructure typically get delayed and/or bundled into high-risk, complex deployments that rely heavily on deep (and greying) SMEs, system downtime, and weekend death marches.
These heavily manual practices are repeated to ensure that artisanal, fragile infrastructures do not fail.  They are not designed or intended to support change; rather, they are coping mechanisms that slow change in response to painful past failures, in the hope that those failures are not repeated. In contrast, immutable infrastructure practices and the required automation are specifically designed and intended to manage change.  Automation enables updates and innovations to be quickly created and tested, while the development and delivery pipeline manages these changes as they are systematically promoted into PRODUCTION.

This automated pipeline isn’t just a few scripts to deploy a base system or services; rather, it is a fully integrated and orchestrated collection of tools and scripts built to generate value (i.e., changes in PRODUCTION).  Where possible, all infrastructure configurations are defined by parameterized code that can adapt to the specific workloads and/or applications running on them.  This eliminates the need for experts to hand-craft a deployment.  Furthermore, it makes your infrastructure fully auditable and your changes traceable.  (Can you imagine a world with no more finger-pointing?)

Bringing this level of automation and transparency into an enterprise is entirely possible, since much of the needed expertise is already in house.  What tends to be missing is a vision of what is possible, a design pattern for continuous delivery tooling, and, most importantly, a strong commitment to change.  That commitment is often dwarfed by the existing stress of the work and rework associated with maintaining and managing legacy, artisanal environments.

Transitioning to an immutable infrastructure model is a journey and should be viewed as such. It demands intention and discipline to build resilient systems and platforms.  It rarely happens by accident and will not scale without proper commitment.  The benefits of greater agility, speed, stability, and resiliency are clear.  Are you ready to take on your infrastructure, or will immutability remain a myth in your organization?

To learn more about how EMC can help you in this transition, please contact us at devops@emc.com.

About Bart Driscoll


Global Innovation Lead – Digital, Dell Technologies Consulting

Bart Driscoll is the Global Innovation Lead for Digital Services at Dell Technologies. This practice delivers a full spectrum of platform, data, application, and operations related services that help our clients navigate through the complexities and challenges of modernizing legacy portfolios, implementing continuous delivery systems, and adopting lean DevOps and agile practices. Bart’s passion for lean, collaborative systems, combined with his tactical, action-oriented focus, has helped Dell Technologies partner with some of the largest financial services and healthcare companies to begin the journey of digital transformation.

Bart has broad experience in IT ranging from networking engineering to help desk management to application development and testing. He has spent the last 22 years honing his application development and delivery skills in roles such as Information Architect, Release Manager, Test Manager, Agile Coach, Architect, and Project/Program Manager. Bart has held certifications from PMI, Agile Alliance, Pegasystems, and Six Sigma.

Bart earned a bachelor’s degree from the College of the Holy Cross and a master’s degree from the University of Virginia.
