
Immutable Infrastructure – Myth or Must?

By Bart Driscoll, Global Innovation Lead – Digital, Dell Technologies Consulting | August 16, 2016

I am as guilty as the next person when it comes to slinging techno-jargon. In just the last week, I swear I have used, or heard used, the term “immutable infrastructure” in the context of DevOps, IaaS, and PaaS at least five times. So what does it mean? Why should an enterprise care? And what are some of the challenges of moving to an immutable infrastructure?

Well, before tackling these questions, let’s agree on a definition of immutable infrastructure. The phrase was first coined by Chad Fowler in his 2013 blog post, “Trash Your Servers and Burn Your Code: Immutable Infrastructure and Disposable Components”. The model borrows from the programming concept of immutability, which holds that once an object is created, its state must remain unchanged. In other words, the only way to change an immutable piece of infrastructure (a server, container, component, etc.) is to replace the old version with a new, updated one.
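
To make the programming analogy concrete, here is a minimal Python sketch (purely illustrative, not from Fowler’s post) contrasting mutating a server record in place with replacing it by a new, versioned copy:

```python
from dataclasses import dataclass, replace

# A frozen dataclass behaves like an immutable object: its attributes
# cannot be reassigned after construction.
@dataclass(frozen=True)
class Server:
    name: str
    image_version: str

web01 = Server(name="web01", image_version="1.0.0")

# web01.image_version = "1.0.1"  # would raise FrozenInstanceError -- no in-place patching

# The only way to "change" it is to build a new instance and retire the old one,
# which is exactly how an immutable server or container is updated.
web01_v2 = replace(web01, image_version="1.0.1")
print(web01, web01_v2)
```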

On the surface, the paradigm of immutability sounds like a dream. Production systems are no longer contaminated by ad hoc, out-of-cycle patches, bug fixes, and updates. Access and credentialing can be simplified, since locked systems don’t need developers and testers with root access. And the configuration management database (CMDB) can finally become a source of truth for the enterprise, enabling teams to easily replicate and recover production, because change no longer “just happens” — especially when no one is watching. The benefits of immutability in infrastructure are clear. So why don’t more enterprises employ this best practice?

Well, it is because adopting this practice requires two critical and circularly dependent practices to be in place:

1. Enterprises must stop associating value with preserving artisanal infrastructure.

Effort spent creating and maintaining fleets of unique, un-reproducible servers in a data center runs counter to an enterprise’s goal of digital transformation. There is ample evidence that the traditional practices of supporting and maintaining long-lived (i.e., mutable) servers and components increase operational complexity and risk and result in slower, lower-quality deployments.
To transition to an immutable infrastructure operating model, enterprises must create a system that enables all changes (infrastructure and application) to be created, tested, and packaged outside of PRODUCTION. Even simple changes, like patching an OS or deploying a bug fix to an application, must move through a delivery pipeline before they are introduced into PRODUCTION.

This delivery pipeline packages the outputs, or artifacts, needed to deploy the change from scratch. It then verifies and validates that the change can run successfully in the data center before promoting it. As a change nears this promotion point, a new instance representing the updated server or component is created in PRODUCTION. Traffic is then routed to the new instance, and the old version is deprecated and ultimately recycled. This is, in essence, a blue/green deployment.
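
As a rough illustration of that blue/green pattern, here is a minimal Python sketch of the ordering of steps. The provision, smoke-test, traffic-switch, and decommission callables are hypothetical placeholders for whatever provisioning and load-balancer tooling an enterprise actually uses:

```python
def blue_green_deploy(artifact, current_instance,
                      provision, run_smoke_tests, switch_traffic, decommission):
    """Deploy a pre-built artifact by replacement rather than in-place change.

    All callables are stand-ins for real tooling (an IaC engine, a load
    balancer API, etc.); this only shows the sequence of the pattern.
    """
    # 1. Build the new ("green") instance from the packaged artifact.
    green = provision(artifact)

    # 2. Verify the new instance before it ever takes traffic.
    if not run_smoke_tests(green):
        decommission(green)  # discard the failed instance; blue is untouched
        raise RuntimeError("green instance failed verification")

    # 3. Route production traffic to the new instance.
    switch_traffic(green)

    # 4. Retire the old ("blue") instance instead of patching it.
    decommission(current_instance)
    return green
```

The key point is that the old server is never modified: it is either still serving traffic or it is being recycled.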

To make this leap to immutable infrastructure and change the value system, CIOs need a trusted automation platform managed by policy-driven workflows to replace the care-and-feeding activities of tens or hundreds of expert system administrators and release engineers. Without an automated, end-to-end delivery pipeline for infrastructure, CIOs and their teams will not be able to make the transition. This leads to the second practice: continuous delivery for infrastructure.

2. Enterprises require fully automated, end-to-end pipelines to manage the creation, promotion, and deployment of runtime environments.

Without automated pipelines, the cost of introducing a change into PRODUCTION tends to exceed the expected return on investment, as well as the personal commitment often required to successfully deploy that change. As a result, updates and revisions to infrastructure typically get delayed and/or bundled into high-risk, complex deployments that rely heavily on deep, greying SMEs, system downtime, and weekend death marches.
These heavily manual practices are repeated to ensure that artisanal, fragile infrastructures do not fail. They are not designed or intended to support change; rather, they are coping mechanisms meant to slow change in response to painful past failures, in the hope that those failures are not repeated. In contrast, immutable infrastructure practices and the automation they require are specifically designed to manage change. Automation enables updates and innovations to be quickly created and tested, while the development and delivery pipeline manages these changes as they are systematically promoted into PRODUCTION.

This automated pipeline isn’t just a few scripts to deploy a base system or services; rather, it is a fully integrated and orchestrated collection of tools and scripts built to generate value (i.e., changes in PRODUCTION). Where possible, all infrastructure configurations are defined by parameterized code that can adapt to the specific workloads and/or applications running on them. This eliminates the need for experts to hand-craft a deployment. Furthermore, it makes your infrastructure fully auditable and your changes traceable. (Can you imagine a world with no more finger-pointing?)
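
To illustrate what “parameterized code” can look like, here is a small sketch — not any specific tool’s syntax — in which workload parameters live in version control and a function renders the concrete environment definition instead of an expert hand-crafting it. The profile names and registry URL are invented for the example:

```python
# Hypothetical workload parameters -- in practice these would live in
# version control alongside the applications they describe.
WORKLOAD_PROFILES = {
    "web-frontend": {"instance_count": 4, "cpu": 2, "memory_gb": 4},
    "batch-analytics": {"instance_count": 2, "cpu": 8, "memory_gb": 32},
}

def render_environment(workload: str, image_version: str) -> dict:
    """Produce a complete, reproducible environment definition from parameters.

    Because the output is derived entirely from versioned inputs, every
    deployment is auditable and every change is traceable to a commit.
    """
    profile = WORKLOAD_PROFILES[workload]
    return {
        "workload": workload,
        "image": f"registry.example.com/{workload}:{image_version}",
        "instances": profile["instance_count"],
        "resources": {"cpu": profile["cpu"], "memory_gb": profile["memory_gb"]},
    }

print(render_environment("web-frontend", "1.4.2"))
```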

Bringing this level of automation and transparency into an enterprise is entirely possible, since much of the needed expertise is already in house. What tends to be missing is a vision of what is possible, a design pattern for continuous delivery tooling, and, most importantly, a strong commitment to change. That commitment is often dwarfed by the existing stress of the work and rework associated with maintaining and managing legacy, artisanal environments.

Transitioning to an immutable infrastructure model is a journey and should be viewed as such. It demands intention and discipline to build resilient systems and platforms. It rarely happens by accident and will not scale without real commitment. The benefits of greater agility, speed, stability, and resiliency are clear. Are you ready to take on your infrastructure, or will immutability remain a myth in your organization?

To learn more about how EMC can help you in this transition, please contact us at devops@emc.com.


About Bart Driscoll


Global Innovation Lead – Digital, Dell Technologies Consulting

Bart Driscoll is the Global Innovation Lead for Digital Services at Dell Technologies. This practice delivers a full spectrum of platform, data, application, and operations related services that help our clients navigate through the complexities and challenges of modernizing legacy portfolios, implementing continuous delivery systems, and adopting lean devops and agile practices. Bart’s passion for lean, collaborative systems combined with his tactical, action-oriented focus has helped Dell Technologies partner with some of the largest financial services and healthcare companies to begin the journey of digital transformation.

Bart has broad experience in IT, ranging from network engineering to help desk management to application development and testing. He has spent the last 22 years honing his application development and delivery skills in roles such as Information Architect, Release Manager, Test Manager, Agile Coach, Architect, and Project/Program Manager. Bart has held certifications from PMI, the Agile Alliance, Pegasystems, and Six Sigma.

Bart earned a bachelor’s degree from the College of the Holy Cross and a master’s degree from the University of Virginia.


2 thoughts on “Immutable Infrastructure – Myth or Must?”

  1. How do you deal with immutable infrastructure when it comes to development environments? It seems to me that you can’t eliminate capabilities like SSH, or even changing the configuration of the systems, in a development environment, since that is its purpose. I can see that the infrastructure as code that provisions the development environment could be maintained and kept as current as possible. However, building a system requires experimenting with code, trying things, compiling, trying out system configuration changes, etc. I would think the approach would be that once you get out of the development environment and graduate through operational environments (development, test, user acceptance, production), systems would be more and more locked down. In addition, once out of the development environment, provisioning of the infrastructure and the associated development software would be fully automated. Still, it seems to me that the development environment is a special case.

  2. @Greg – Thanks for the question. I agree with your comment that ‘as you move closer to Production “the systems would be more and more locked down”’, although I would suggest that the faster you can “lock down” a system (ideally in TEST), the better. By locking a system configuration early, you will have more opportunities to verify and validate the deployment. Following this model, when you get to Production, you will be confident that your deployment automation and configurations will work. But your question was more focused on how you introduce change and experiment prior to production.
    To your point, you can’t be fully immutable in DEV; otherwise you can’t introduce change, which defeats the purpose of DEV in the first place. That said, you can introduce immutability into DEV using version-controlled, code-based automation and a ‘microservice’ pattern. The goal of these practices is to isolate the change and test a discrete (atomic) change against a known-good state. By segmenting your automation code (typically by layers and SW packages), you can isolate change. Once your single change is proven, you update your composite known-good state and introduce another change. In practice, this means you would start with pre-baked infrastructure and application deployment code and a standard configuration file stored in a centrally managed, shareable repository. Using a version-control tool like Git, you pull the proven artifact(s) into your DEV environment. Rather than connecting interactively via SSH to a box and using the command line to manually change or set configurations, you instead update the configuration file and automation code associated with your specific application and stack. Once your configuration changes have been made, you rerun the deployment automation using these updated files and test the changes to verify and validate the results. If the changes pass testing, they are checked in to version control (e.g., Git) and become the new working files for the project. (A rough sketch of this loop follows the comments below.)
    While this process takes a little longer than manually tweaking a config in DEV, it protects you from having to document and remember what you changed. Additionally, your changes will be stored in version control and based on proven enterprise standards. This gives you a complete audit trail and history, which helps in the future when debugging or presenting to advisory boards.
    This isn’t easy, and it is a HUGE departure from how system engineers and developers typically work in many of the large (and small) organizations we work with. More often than not, organizations focus on the tooling rather than the practices and processes. While implementing and integrating the tools can be challenging, getting teams to use the tools — and more importantly to use them well — is far more complicated. At Dell Tech, we have built a services team that focuses exclusively on helping our clients build these capabilities (and tool chains) internally. If you would like to learn more about what we do, visit our web page at http://www.dellemc.com/devops
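
The “change by code, not by SSH” loop described in the reply above might look roughly like the following Python sketch. The config_edit, deploy, and run_tests callables are hypothetical placeholders for whatever editing, deployment-automation, and test tooling a team actually uses; only the Git commands are real:

```python
import subprocess

def dev_change_cycle(repo_url, config_edit, deploy, run_tests):
    """One iteration of making an isolated change against a known-good state."""
    # 1. Pull the proven, known-good deployment code and configuration.
    subprocess.run(["git", "clone", repo_url, "workspace"], check=True)

    # 2. Make one discrete change to the configuration/automation files
    #    instead of logging into a box and changing it by hand.
    config_edit("workspace")

    # 3. Re-run the deployment automation from the updated files.
    deploy("workspace")

    # 4. Verify; only a passing change becomes the new known-good state.
    if run_tests("workspace"):
        subprocess.run(["git", "-C", "workspace", "commit", "-am",
                        "Isolated configuration change"], check=True)
        subprocess.run(["git", "-C", "workspace", "push"], check=True)
    else:
        # A failed change is simply discarded; the known-good state is untouched.
        subprocess.run(["git", "-C", "workspace", "checkout", "--", "."], check=True)
```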