Bart Driscoll – InFocus Blog | Dell EMC Services

Multicloud Data Centers and Going D.I.G.I.T.A.L? (Do I Give it to Amazon then Leave?)

Gosh, I hope the answer is no.

But I have to admit that this does appear to be the big, red easy button that many CIOs around the world are exploring. It isn't just Amazon; it's Google, Azure, Virtustream, and others. Given the short tenure of C-suite execs, many CIOs feel pressure to outsource IT to the cloud in order to meet increasingly aggressive SLOs, cost-cutting demands, and, most importantly, expectations for innovation. A few years ago we called this phenomenon "shadow IT." Today, the multicloud data center is fast becoming the norm. It isn't happening in the shadows (as much); it is a conscious, informed decision made by the business and IT together, in the hope of delivering on the promise and value of digital transformation.

So Does the Multicloud Model Make Sense?

From a purely economic perspective, multicloud doesn't make sense. Assuming you have a well-run IT shop and are effectively automating all routine data center activities like patch management, scaling, and monitoring, it is more cost effective to keep your data center on-premises; if it weren't, Amazon and the others couldn't profitably be in this market. But most IT shops are not well-oiled machines: they are laden with technical debt; they have inefficient, often manual processes; and their investment in automation to date has fallen short of expectations. In this type of environment, a multicloud platform can be the difference between success and failure.

Multicloud Platform Overview

Before delving into what multicloud solves, I should define what a multicloud is. In short, a multicloud is a collection of public and private infrastructure resources (compute, network, storage) administered via a common control and management plane. A basic example is VMware Cloud on AWS, where you can extend your on-premises vSphere environment into the AWS cloud using EC2 instances. In this example, vSphere is your management and control plane, and your on-premises hardware and AWS EC2 instances comprise your integrated resource pool. In this configuration, you can "seamlessly" migrate vSphere-based workloads, packaged as standard VM images, to and from the AWS cloud.

This example leads me to the promise of multicloud platforms, namely speed, portability, scalability, and resiliency. These capabilities are attained via one common thread across all multicloud platforms: consistency. Unlike the data center of the past, where environments are assembled from procedural scripts, checklists, and tribal knowledge, multiclouds, powered by the public cloud paradigm, employ automation to manage infrastructure resources. Basic provisioning, scaling, patching, and configuration are managed by the platform. This enables IT to reliably and repeatedly provide accurate, consistent environments to product development teams and business users. Consistency at this layer provides a solid foundation, or known-good state, against which you can confidently develop, verify, and validate change, thereby reducing risk and accelerating throughput. Furthermore, by managing to a known-good state, operators of multiclouds can more effectively monitor for change, drift, and other issues. And because the platform is adept at recreating consistent environments, it can quickly recover from outages or other issues by "repaving" the resources.

Lastly, consistency enables both portability and scalability by providing the immutable building blocks needed to recreate and reconfigure an instance or environment. It is through these methods that a multicloud platform closes the performance and productivity gaps of traditional IT shops.

A Multicloud IT Shop

Despite the ambitions of tooling providers around the world, the automation, orchestration, and validation alluded to above don't come out of the box when you purchase a cloud solution. This is because many enterprises support multiple technology stacks, flexible environment contracts, and numerous integration patterns. There are simply too many permutations to provide a fully supported, packaged multicloud solution. As such, the onus of building this platform, and the myriad services and tools around it, falls on the enterprise IT shop.

If IT attempts to build this platform by mimicking existing processes and practices, it will inevitably recreate a "newer, shinier" version of what it supports today: ticket-driven workflows, snowflake configurations, and manual processes that deliver underwhelming outcomes. In order to truly transform, IT must introduce new skills, practices, processes, and tools that change the way systems are defined, deployed, and managed.


Key characteristics of the transformed, multicloud shop:

Lean and Agile Principles

Multicloud IT shops have fully embraced lean and agile thinking. Above all else, they value working code in production: both the applications and the infrastructure running in PRODUCTION are defined, deployed, and configured by code. Code also defines the workflow, orchestration, and testing that create, verify, and manage the environment and application. What most distinguishes these multicloud shops is that they work tirelessly to improve practices and processes in order to accelerate time-to-production and minimize the cost of support. Their transformation never really ends.

Systems Thinking Approach (End-To-End)

Embracing these lean and agile concepts, multicloud IT shops employ a systems thinking approach to problem solving. It isn't enough to optimize the activities of a single team or department; multicloud solutions employ an end-to-end pipeline (or workflow) that mirrors the SDLC and change management process. By taking a systems view, a multicloud shop is always identifying constraints in the pipeline and actively working to remediate them.

API-first Design

API-first design does not equate to deployment automation via procedural scripts. Multicloud platforms consist of an ecosystem of tools, code, and configurations that, in concert, define, deploy, and manage your data center. Because integration between tools, the pipeline, and the actual environments is critical, APIs are the language and design pattern of modern multiclouds. APIs can be surfaced through catalogues, through orchestrations, and/or via the command line. For example, you may use Puppet to define, orchestrate, and manage environment configurations; Puppet is then linked to a pipeline (workflow) tool like Jenkins or CodeStream, and automated testing tools that validate and verify the configuration are linked into that same workflow.
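
To make this concrete, here is a minimal sketch of what an API-driven pipeline stage might look like. The endpoints, payloads, and response shape are purely illustrative stand-ins, not actual Puppet, Jenkins, or CodeStream APIs; the point is simply that every step is invoked and checked through an API rather than a hand-run script.

```python
# Illustrative only: a pipeline driver that calls each stage through a REST API.
# The URLs and response fields are hypothetical stand-ins for real tool endpoints.
import requests

STAGES = [
    ("provision", "https://config-mgmt.example.com/api/environments"),
    ("configure", "https://config-mgmt.example.com/api/apply"),
    ("validate",  "https://test-runner.example.com/api/suites/smoke/run"),
]

def run_pipeline(change_id: str) -> bool:
    """Drive each stage via its API and stop at the first failure."""
    for stage, url in STAGES:
        resp = requests.post(url, json={"change_id": change_id}, timeout=300)
        if resp.status_code != 200 or not resp.json().get("success", False):
            print(f"{stage} failed for {change_id}")
            return False
        print(f"{stage} passed for {change_id}")
    return True

if __name__ == "__main__":
    run_pipeline("CHG-1234")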

Emergent Standards

This is one of the most difficult concepts to embrace in the multicloud operating model, as control shifts from Enterprise Architects and Review Boards to product teams and pipelines. As applications and environments are on-boarded into the multicloud by product teams, architects help build and design service packages. Through these efforts, design patterns begin to emerge across the portfolio. These patterns are then promoted into standards. Standards are managed and maintained via automated tests and checks, and product teams that use standards bypass the Review Process in favor of those automated tests. As new service packages are created, the architecture team can pair with the product team to define and ultimately promote new standards. In a multicloud operating model, the standards definition and review process is evergreen.
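
As a rough illustration of how a promoted standard can be enforced as code rather than by a review board, consider a check like the one below. The field names and approved values are invented for the example; in practice they would come from the patterns your architects and product teams promote.

```python
# Illustrative only: an emergent standard expressed as an automated check.
# Field names and approved values are hypothetical.
APPROVED_BASE_IMAGES = {"rhel-7.9-hardened", "ubuntu-16.04-hardened"}
REQUIRED_FIELDS = {"owner", "base_image", "backup_policy", "monitoring"}

def check_service_package(pkg: dict) -> list:
    """Return a list of violations; an empty list means the package meets
    the promoted standard and can bypass the manual review process."""
    violations = [f"missing field: {f}" for f in REQUIRED_FIELDS - pkg.keys()]
    if pkg.get("base_image") not in APPROVED_BASE_IMAGES:
        violations.append(f"unapproved base image: {pkg.get('base_image')}")
    return violations

print(check_service_package({
    "owner": "team-billing",
    "base_image": "rhel-7.9-hardened",
    "backup_policy": "daily",
    "monitoring": "enabled",
}))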

Developer Skillset

This is the most visible change when transitioning to a multicloud operating model: everyone codes. In practice, an "Operator" no longer applies a patch to a production machine. Instead, they build and test new automation code that deploys that patch onto a known-good version of the infrastructure. This development process creates a new version of infrastructure that is verified and validated before being promoted to the Production service catalogue. Once promoted, product teams can validate and verify their applications against this new version before it is rolled out across the data center. This minimizes outages and issues caused by platform and configuration dependencies being pushed out arbitrarily. It also puts the onus of refactoring the application to run correctly on the new "standard" squarely on the development teams.

Summary

Introducing multicloud into the modern data center is about more than new hardware and software. To achieve the outcomes of digital IT transformation, organizations also need to change how they build and manage these solutions. To be successful, these changes will require a critical evaluation of existing processes, skills, tooling, and more.


Measuring What Matters: An Enterprise DevOps Perspective on IT Performance

Performance and productivity metrics are important because they give us the needed information to shape or reshape behavior. They provide the feedback and insight to continually improve processes and practices toward our common goal, namely creating value for the customer.

Figure 1: The DevOps Scorecard

Unfortunately, in many large IT shops, performance metrics are not aligned toward this common goal; rather, they reflect the objectives and scorecards of individual departments or practices. For example, does a customer care about uptime? No. Customers and users expect applications, and the systems they run on, to be up. They don't care whether it has been running for 1 day or 364 days without interruption. Yet uptime (aka the five 9s) is often a key success metric for System Administrators and IT shops at large.

"Nice work keeping the servers running," said no CEO ever.

Too often, we focus on IT-centric measures, like uptime, rather than on customer or user success measures to evaluate our performance. I am not suggesting uptime is unimportant; rather, if you approach this metric from the customer's point of view, you will quickly see that uptime is not really valuable to the customer, nor does it give the organization any real insight into its performance.

To keep it simple, think of your car or truck. When you bring it to a garage, would you want the mechanic or service manager to tell you how many days, hours, and minutes you drove it without incident before bringing it into the shop? No. You don't care. Would that data be valuable to the car dealership or manufacturer? Does it provide actionable data? I would argue no.

But in IT, we think Uptime is a critical, key measure. We bonus people on maintaining uptime levels. We spend time and money capturing, collecting, transforming, and reporting on that data. Yet, it isn’t adding value to either our customer or our own performance.

Borrowing a page from DevOps, we know that flow-based, event-driven metrics are critical to measuring and reporting on the performance of teams developing and deploying applications and infrastructure. Flow-based, event-driven metrics help IT teams answer critical performance questions from the customer perspective. They provide feedback on value-creation processes and practices, such as:

  • How quickly can a request be fulfilled?
  • How quickly can a new or updated capability, function, or service be delivered?
  • How quickly can the system recover from an issue or outage?
  • How likely is it that an update you deliver will fail for the customer?

These customer-centric questions translate directly into performance measures such as success rate, change cycle time, mean time to recover (MTTR), and release frequency. Additionally, all four of these metrics are directly actionable.
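
As a simple illustration of how these four measures can be derived from ordinary deployment records, consider the sketch below. The event fields are invented for the example; in practice they would come from your pipeline and incident tooling.

```python
# Rough sketch: computing the four measures from change records (schema invented).
from datetime import datetime
from statistics import mean

changes = [
    {"committed": datetime(2018, 1, 2, 9), "deployed": datetime(2018, 1, 4, 15),
     "failed": False, "restored": None},
    {"committed": datetime(2018, 1, 8, 10), "deployed": datetime(2018, 1, 9, 11),
     "failed": True, "restored": datetime(2018, 1, 9, 13, 30)},
]

def hours(delta):
    return delta.total_seconds() / 3600

success_rate = 1 - sum(c["failed"] for c in changes) / len(changes)
cycle_time   = mean(hours(c["deployed"] - c["committed"]) for c in changes)
mttr         = mean(hours(c["restored"] - c["deployed"]) for c in changes if c["failed"])
period_days  = (max(c["deployed"] for c in changes) - min(c["deployed"] for c in changes)).days or 1
frequency    = len(changes) / period_days  # deploys per day

print(f"success rate {success_rate:.0%}, cycle time {cycle_time:.1f} h, "
      f"MTTR {mttr:.1f} h, {frequency:.2f} deploys/day")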

Figure 2: Baseline your performance against industry

For example, if you miss your uptime target, you then need to generate a new report highlighting downtime. More specifically, you need to inspect that downtime to understand why it happened (success rate); why it took so long to recover (MTTR); and why it took so long to repair (change cycle time).

It is these event-based metrics that provide the insights and data to improve performance. If success rate is low, you can evaluate your quality, verification, and validation processes to understand how and why issues are missed before hitting Production. If your recovery time is too long, you can evaluate the processes of deployment, rollback, and failover to improve adherence to known-good packages and standards. If cycle time is too long, you can evaluate your change management and development processes to accelerate responsiveness and agility.

Furthermore, if you do measure these flow-based, event-driven metrics you will be indirectly managing uptime. If your success rate is high and your recovery time is low, then by default your uptime is high.
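
A quick back-of-the-envelope check of that claim, with purely illustrative numbers:

```python
# If change failures are rare and recovery is fast, availability follows.
deploys_per_month   = 20
change_failure_rate = 0.05   # one failed change in twenty
mttr_hours          = 0.5    # half an hour to restore service
hours_per_month     = 30 * 24

downtime_hours = deploys_per_month * change_failure_rate * mttr_hours
availability   = 1 - downtime_hours / hours_per_month
print(f"availability ~ {availability:.3%}")   # ~99.931%, without ever tracking uptime directly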

And don't forget the customer: these flow-based, event-driven metrics also correlate with customer satisfaction and value. If IT is responsive to customer requests, timely in its delivery, confident in quality, and quick to recover when an issue does occur, then customers will be more satisfied with the services provided. This is corroborated by the annual State of DevOps Report, which regularly finds that high-performing IT teams are twice as likely to over-achieve on enterprise profitability, market share, and productivity goals.

So, where does this data come from?

Flow-based, event-driven performance metrics are derived from data generated by continuous delivery (CD) pipelines. Event and application data combined with logs from various tools along the workflow capture key measures that reflect real-time pipeline performance.

For example, an automated test tool, like unittest, will generate a success flag, an audit trail, and an elapsed-time count. This data, combined with data from the other applications across the tool chain, is aggregated by change or release candidate. Together, this data illustrates the success rate and cycle time of the end-to-end process for a single change or release candidate. Change/release data can be further combined to illustrate trends at the application, program, portfolio, or enterprise level.
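
A minimal sketch of that aggregation step, with an invented event schema, might look like this:

```python
# Illustrative only: roll tool-chain events up to a per-release-candidate view.
from collections import defaultdict

events = [
    {"candidate": "RC-42", "tool": "unit-tests",    "success": True,  "elapsed_s": 310},
    {"candidate": "RC-42", "tool": "security-scan", "success": True,  "elapsed_s": 620},
    {"candidate": "RC-42", "tool": "deploy-stage",  "success": False, "elapsed_s": 95},
]

by_candidate = defaultdict(list)
for event in events:
    by_candidate[event["candidate"]].append(event)

for rc, evts in by_candidate.items():
    passed  = all(e["success"] for e in evts)
    elapsed = sum(e["elapsed_s"] for e in evts) / 60
    print(f"{rc}: {'green' if passed else 'red'}, {elapsed:.1f} min end-to-end")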

Granular, real-time data surfaced to teams provides them with the needed information to act. This data can inform a team early that there is a problem with a given release, or that their process is too slow. Furthermore, it points directly to the constraint area or issue allowing the team to quickly swarm the problem.

Employing this proactive measurement model requires a shift in how we design, build, and report metrics from both a technological and business perspective. It requires a clear understanding of desired enterprise outcomes, collaboration across IT and business functions, and modern architectures to be successful. For a deep dive on how to build proactive monitoring systems, I recommend the book The Art of Monitoring by James Turnbull.

Summary

Our experience in helping customers transform IT using DevOps principles and practices has hardened our resolve about the importance of flow-based, event-driven performance metrics. Without these metrics it is impossible to prove success with DevOps and, more importantly, impossible to understand where best to act next. Metrics are the language of the executive.

If we want to transform the enterprise, we need to use a language executives understand. In my opinion, flow-based, event-driven performance metrics are the key.

Recently, I have been working with the thought leaders at DevOps Research and Assessment (DORA). They have incorporated these customer-focused performance metrics into an online assessment tool, also called DORA. The tool not only maps these metrics but also identifies 20 known IT capabilities proven to drive enterprise performance. Check out the assessment, and note that measurement and monitoring are considered a key competency there too.

Watch DORA CEO and Chief Scientist Dr. Nicole Forsgren and CTO Jez Humble’s research program presentation, “What We Learned from Three Years of Sciencing the Crap out of DevOps.”

Sources:

Figure 1: The DevOps Scorecard

Figure 2: DevOps Research & Assessment

Crossing the Cultural Chasm with DevOps

At this point in the DevOps movement, if you ask nearly anyone what the hardest part of DevOps is, they will invariably say changing culture. It isn't the technology; it isn't even process; it is the people who are the impediment to change. Why is this? Why is change so hard? Despite the reports and the numbers, why is it so difficult to change people and transform enterprises? Before exploring how and why to change culture, let's first take a closer look at culture and understand what it is that we are trying to change.

Culture is what you do without really thinking about it. It is a collection of behaviors, beliefs, values, and symbols that are passed along through stories, experience, and imitation. Think back to your first month at your current job or employer. How many times did you hear "…because that is how we do it" or "…talk to Mary because she can show you how to do it"? That is culture. Culture isn't written down. You can't see it on an org chart. And you won't likely find it in process flows. But it is the way of life in an organization; it is the underlying, undocumented operating fabric of your organization.

For me, cultural fabrics are a lot like knit sweaters. They are designed to be resilient. You can push, pull, tug, and stretch an enterprise's operating fabric, but like a sweater, it will bounce back to its original shape. Recognizing these characteristics, you can begin to understand why it is so difficult to change and why it is so critical to dismantle the old fabric and replace it with a new fabric or culture. Too often, changes are introduced without a good understanding of how they fit into the flow (or pattern) of work. They aren't part of a holistic plan. A good example is agile development. For many enterprises, agile was rolled out to development and test teams with little thought to upstream product management practices and/or downstream operations and release management practices. MVPs and 2-week release cycles were (and are) very disruptive to organizations that typically release quarterly or less frequently. While agile was great in principle, it wasn't embedded into a more holistic plan. While new experiences were introduced to DEV/TEST teams, product management and operations were still operating in the old model. The models were not consistent and, as a result, failed to achieve the expected goals and outcomes.

Many transformations struggle because they introduce new behaviors and measures that are unattainable without support from, and changes to, upstream and downstream processes. Rather than optimizing horizontally, that is, optimizing a single process or activity across many products and teams, DevOps acts vertically, solving all problems end-to-end for a single application and team. This practice ensures that solutions are fully cognizant of potential upstream and downstream disruptions. It stays holistic by design. The practice of building end-to-end is commonly referred to as building pipelines.

Unfortunately, this process of building a pipeline culture takes time. At first it is very slow and difficult. As more people use the new way and the new cultural fabric starts to take shape, you begin to build momentum. Over time, and with proven success, your velocity will begin to accelerate. Acceleration requires a deliberate, intentional process based on the scientific method: we aren't guessing our way to success; we are measuring results, radiating successes, and evolving practices, processes, and tools in the enterprise.

Measuring Results

I appreciate that defining, collecting, and analyzing metrics and data is difficult, and for many painful, but we need to move past this. Without metrics we cannot be successful in transforming culture. Metrics provide the data that tell us whether the newly introduced changes are working or not. For example, if your baseline mean time to deploy was 45 days per release and your post-DevOps deploy time is 15 days, you have tripled your release frequency. The metric tells us that the work we did to improve collaboration, build automation, and orchestrate the pipeline is providing the desired outcomes. Without hard metrics, you would be forced to rely on anecdotal evidence and feel-good stories from users. Metrics will help you get additional funding and support; stories will get you a pat on the back at best. Metrics are also critical in communication: they are easily shared in reports, dashboards, posters, and other fun displays. Success awareness and visibility breed future success and influence late adopters.

But what should be measured?

It is going to be a little different for every organization, but the process of defining metrics is the same regardless of industry, type, size, etc. It all starts at the top. Every metric you create needs to be aligned with the over-arching corporate goals and objectives. These goals and objectives are where the company is planning to spend money. If you can't figure out how to be relevant against these goals, then it will be nearly impossible to get the time, funding, or mindshare from senior leaders and executives that you need to transform your culture.

In most large enterprises, IT has already distilled these enterprise-level goals into IT-focused objectives. They usually look something like "drive efficiency or productivity, reduce cost, accelerate delivery, improve quality," etc. While these are good intentions, they are not actionable metrics. To make them actionable, you need to define specific, measurable targets and then understand which processes and practices will be affected to achieve those changes. Goals should be specific, measurable, attainable, realistic, and timely (S.M.A.R.T.). For example, a large bank wanted to improve earnings per share. IT determined that it could support this effort by improving time-to-market, so it could capture revenue earlier, and by reducing cost of goods. A specific goal for 2016 was to reduce deployment time from 90 days to less than 30 days. This is actionable. We know what to measure, the outcome is specific and realistic, and we know by when we need to achieve results. Beneath this targeted objective are specific team-level metrics used to measure changes in behavior on a day-to-day, sprint-to-sprint, release-to-release cadence. For example, a team can measure the number of code commits per week so that it can move closer to the goal of deploying every 28 days or less. If team members aren't committing code regularly and merge activities are long and painful, then it will be nearly impossible to achieve the 28-day-or-less target. Code commits alone aren't going to make you successful. However, they do begin to change behaviors, a.k.a. culture, toward ones better aligned with frequent deployments. Once a behavior becomes habit, you can replace the metric with something new. Just remember, it typically takes 3-6 weeks and a lot of reminding to develop a habit.

Radiating Success

Radiating change based on successes is inherent to the DevOps model and a very different approach to organizational change. While the outcome, a.k.a. a new operating fabric, is the same, how we achieve that goal is very, very different. Unlike traditional change models, a DevOps transformation opts for a "prove, then radiate" strategy over a "distribute and inspect" model. In the traditional "distribute and inspect" model, the new operating fabric, structure, processes, and tools are defined in advance and then rolled out to teams to adopt. Centralized inspection practices, like CABs or ARBs, work to ensure people are following the new models and processes. This approach is commonly referred to as a big-bang rollout. It assumes you have most of the answers up front and will only make minor tweaks as you drive adoption across the organization. The DevOps model, with its "prove, then radiate" approach, is much more organic. It starts with individual behaviors and uses those small changes to influence thinking and ultimately culture. It assumes that you must learn and adapt throughout the entire transformation.

The "prove, then radiate" approach employs a top-down strategy to help define the context, goals, and guidelines of the end-state solution; however, the actual solution is built using a bottom-up approach. It starts with a single, cross-functional team and allows those closest to the problem to solve it within the context of the larger transformation. It is a decentralized model. This cross-functional team builds a continuous delivery pipeline to support a single application or component. They build an end-to-end pipeline including many of the common sticking points, like regression testing, security scans, compliance checks, and/or performance testing. Where automation tools are available or the opportunity cost of building automation is low, teams integrate these tools into a single tool chain. Where manual tasks are required, stubs in the workflow trigger notifications to the responsible parties.

Over the course of the pilot, internal champions and SMEs are developed. Often joined by external coaches, this team supports the product teams adapting and adopting the new DevOps system. It is the role of external coaches and technical SMEs to keep challenging assumptions that internal champions and experts have grown blind to. They can also provide guidance and training on new tools, techniques, and/or practices as they are introduced. Internal champions built during the pilot stage are critical to scaling. These internal champions help navigate the organization and political structure. They make key decisions around when and how hard to push. They are the de facto leaders of change. Internal technical SMEs add credibility to the solutions. They also provide new teams with technical support and oversight. They are the hands-on experts proving that this is possible.

As the team builds a solution, they collect feedback and measure the results against the local, group, and enterprise goals. Champions share success stories of early wins that are then extended by SMEs to support dependent applications and teams. As success patterns and processes begin to emerge, new operating fabrics, or cultures, begin to become the standard. Over time, momentum transforms the enterprise until the old way is either non-existent or retains only a minimal footprint. This process is often referred to as bi-modal IT. For additional information on how to implement change in this type of model, I recommend The Knowing-Doing Gap by Pfeffer and Sutton. It recommends the following strategies, which I use regularly:

  1. Be clear about why. If you can't articulate why you are doing something, then how will other people follow?
  2. Do more and teach more. We learn by doing and build mastery by helping others.
  3. Expect failure, drive out fear. Expect some work to be thrown away. Just because you are throwing it away doesn’t mean it has no value. Capture your lessons learned and don’t make the same mistake again later.
  4. Collaborate and listen more; compete less (internally). Internal competition is typically misused, creating friction and boundaries between departments and teams. As a result, time is wasted and money is spent that could be better used to create new value. New product ideas should compete; departments should collaborate.

“Valley of Despair”

The valley of despair is a dip in adoption or momentum that occurs as all cool new ideas are confronted by the overwhelming inertia of the legacy culture. You can't skip it, jump over it, or go around it. You need to go through it. The longer you stay in the valley, the more likely your change effort will fail. All change efforts go through the valley. The depth of the dip and the breadth of the chasm are typically proportional to how big and revolutionary a change is. In simple terms, the valley of despair is the amount of time and effort it takes to transform something cool and exciting into something normal and routine. Remember, you are trying to transform culture. You are trying to create new habits of success.

I liken this process to that of learning to ride a bicycle. As you start, there is nothing but sheer excitement. The training wheels are off; there is lots of support and energy from your family or friends; you are sitting on your bike smiling at the top of a small hill and start rolling down. You are riding and it is cool! This is just like building a DevOps prototype. Things are cool and exciting and you are seeing some early success.

Then you start to wobble. You lose some speed. Lose balance. And crash. It hurts (pride, mostly) and you get up yelling at the bike, mad at your family or friends for making you try, and swearing you will never do it again. You are in the valley of despair. Just like your attempt to get another team to use your "new DevOps way". If you don't get back on the bike, you never learn to ride. Convincing you to get back on the bike and try again is hard. Just like DevOps.

It isn’t until you can pedal, balance, and steer without thinking about it that you really start enjoying riding a bike. Just like DevOps. It takes courage and perseverance to overcome the valley of despair.

Evolving the Enterprise

The most common pitfall enterprises fall into when investing in DevOps is to recreate the same system they have today, just with more automation and a few people with DevOps in their title. My favorite example of this came from a keynote at this year's OpenStack Summit in Austin: the speaker talked of a company wanting to improve on its 44-day release time. It made a significant investment in cloud technologies and automation. After a number of months and a lot of money, it was able to release in 42 days. Needless to say, this wasn't the result it was looking for. If your sponsors, champions, coaches, and SMEs aren't willing and able to challenge the status quo and introduce new, imperfect, and at times incomplete systems, processes, and practices, you will be doomed to recreate the same mess you are trying to replace. Remember, the existing system is your knit sweater! It is resistant to change.

So how do you EVOLVE these systems? And more importantly, how do you build a system that can respond to change?

You iterate.

We have all heard the DevOps mantra: start small, then build, measure, learn. This model is not new. It is the basic scientific method applied to IT and application delivery. With the exception of AWS and Sticky Notes, it is how nearly all cool things were developed. While some of the ideas, like continuous delivery, might sound revolutionary, the pathway to achieving DevOps-at-scale is much more evolutionary. It starts with recognizing and committing to two things:

First, adopting a DevOps culture is big. It will and does impact nearly every facet of IT. Remember the simple goal of improving deployment frequency from 90 to 28 days. Making that shift requires both the participation and the expertise of Development, QA, Operations, Infrastructure, DBAs, Enterprise Architecture, PMO, BizOps, Security, Compliance, Release Management, and more. These groups need to collaborate, even when it is painful, to ensure that all perspectives are considered, since all of these teams play a critical role in getting to Production. The focus needs to shift from "I did my part" to "we have new working software running in production, available to users." Anything short of working software in production doesn't create value for the enterprise. It is nothing more than work-in-progress. If you find yourself focused on anything short of working software in production, you are likely being constrained by "the old way".

Second, build tool chains for change. Recognizing that different technology stacks and applications will require different tooling to achieve DevOps or continuous delivery automation goals, the platform architecture must be extensible by design. In other words, a robust API layer must surround the workflows, enabling a variety of tools and technologies to support the activities of each step or stage. For example, the build step in your DEV stage may be completed by Jenkins for open source projects and MSBuild for .NET projects. Both are valid tools, and both need to be supported by your platform. Forcing development teams to pick one over the other will cause productivity issues and potentially resentment in the developer community. The goal is to build tools and processes that make it easier to meet expectations and harder to work around the system.
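
One way to picture that extensibility is a thin dispatch layer in the pipeline that keeps the contract constant while the tool behind it varies. The sketch below is illustrative only; the builder functions are stand-ins for calls to Jenkins, MSBuild, or whatever each team uses.

```python
# Sketch of an extensible build step: one pipeline contract, pluggable tools.
# The builder functions are stand-ins for real Jenkins/MSBuild invocations.
def jenkins_build(project: str) -> bool:
    print(f"[jenkins] building {project}")   # stand-in for a Jenkins-backed build
    return True

def msbuild_build(project: str) -> bool:
    print(f"[msbuild] building {project}")   # stand-in for an MSBuild-backed build
    return True

BUILDERS = {"open-source": jenkins_build, "dotnet": msbuild_build}

def build(project: dict) -> bool:
    """The pipeline calls one interface; the registry decides which tool runs."""
    return BUILDERS[project["stack"]](project["name"])

print(build({"stack": "dotnet", "name": "Billing.sln"}))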

Policy and Standards

When solving complex steps like security, compliance, etc., don't design and build for the entire enterprise application portfolio. Start with a single application. As more applications are on-boarded into the new model, look for patterns across types of applications, platforms, solution suites, etc. Promote patterns up the policy stack to maximize reuse of the associated testing frameworks and test cases. Using this model, the enterprise-wide layer should be very thin and lightweight, as it needs to be relevant for ALL applications. Use feedback loops between development teams and architecture and policy teams to keep standards evergreen.

Summary

If you take nothing else from this long post, walk away understanding that changing the enterprise is possible. It isn't easy; there is no magic tool or wand; and many times it will hurt. Despite these and other challenges, the pathway to change is proven, and there are three keys that will help accelerate your DevOps journey: measure your results, radiate your success, and evolve the organization.

Immutable Infrastructure – Myth or Must?

I am as guilty as the next person of slinging techno-jargon. In just the last week, I swear I have used, or heard used, the term "immutable infrastructure" in the context of DevOps, IaaS, and PaaS at least five times. So what does this mean? Why should an enterprise care? What are some of the challenges of moving to an immutable infrastructure?

Well, before tackling all these questions, let's agree on the definition of immutable infrastructure. The phrase "immutable infrastructure" was first coined by Chad Fowler in his 2013 blog post, "Trash Your Servers and Burn Your Code: Immutable Infrastructure and Disposable Components". The model borrows from the programming concept of immutability, which states that once an object is created, its state must remain unchanged. In other words, the only way to make a change to an immutable piece of infrastructure (server, container, component, etc.) is to replace the old version with a new, updated version.

On the surface, the paradigm of immutability sounds like a dream. Production systems are no longer contaminated by ad hoc, out-of-cycle patches, bug fixes, and updates. Access and credentialing can be simplified, since locked systems don't need developers and testers with root access. And the configuration management database (CMDB) will finally be a source of truth for the enterprise, enabling teams to easily replicate and recover production, since change no longer "just happens," especially when no one is watching. The benefits of immutability in infrastructure are clear. So why don't more enterprises employ this best practice?

Well, it is because adopting this practice requires two critical and circularly dependent practices to be in place:

Enterprises must stop associating value with preserving artisanal infrastructure.

Effort spent creating and maintaining fleets of unique, un-reproducible servers in a data center is anachronistic to an enterprise's goal of digital transformation. There is much evidence to suggest that these traditional practices of supporting and maintaining long-lived (a.k.a. mutable) servers and components increase operational complexity and risk and result in slower, lower-quality deployments.
To transition to an immutable infrastructure operating model, enterprises must create a system that enables all changes (infrastructure and application) to be created, tested, and packaged outside of PRODUCTION. Simple changes like patching an OS or deploying a bug fix to an application must move through a delivery pipeline before they are introduced into PRODUCTION.

This delivery pipeline packages the outputs, or artifacts, needed to deploy the change from scratch. It then verifies and validates that the change can run successfully in the data center before promoting it. As a change nears this promotion point, a new instance representing the updated server or component is created in PRODUCTION. Traffic is then routed to the new instance, and the old version is deprecated and ultimately recycled. This is basically a blue/green deployment.
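
A toy version of that cut-over logic, with stand-ins for the real provisioning, validation, and load-balancer calls, is sketched below. The important property is that the running version is never modified in place; it is only replaced or left alone.

```python
# Toy blue/green cut-over; every function here is a stand-in for a real API.
class LoadBalancer:
    def __init__(self, active):
        self.active = active
    def route_to(self, instance):
        self.active = instance

def provision(version):    return f"instance-{version}"   # stand-in for a pipeline-built artifact
def smoke_test(instance):  return True                    # stand-in for automated validation
def recycle(instance):     print(f"recycling {instance}")

def blue_green_deploy(lb, new_version):
    candidate = provision(new_version)      # fresh, immutable instance
    if not smoke_test(candidate):
        recycle(candidate)                  # the live version is never touched
        raise RuntimeError("candidate failed validation")
    old = lb.active
    lb.route_to(candidate)                  # cut traffic over
    recycle(old)                            # deprecate the old version once drained

lb = LoadBalancer(active="instance-1.4.2")
blue_green_deploy(lb, "1.5.0")
print("now serving:", lb.active)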

To make this leap to immutable infrastructure and change the value system, CIOs need a trusted automation platform managed by policy-driven workflows to replace the care-and-feeding activities of tens or hundreds of expert system administrators and release engineers. Without an automated, end-to-end delivery pipeline for infrastructure, CIOs and their teams will not be able to make the transition. This leads to the second practice, namely continuous delivery for infrastructure.

Enterprises require fully automated, end-to-end pipelines to manage the creation, promotion, and deployment of runtime environments.

Without automated pipelines, the cost to introduce a change into PRODUCTION tends to exceed both the expected return on investment and the personal commitment often required to successfully deploy that change. As such, updates and revisions to infrastructure typically get delayed and/or bundled into high-risk, complex deployments that rely heavily on deep (and greying) SMEs, system downtime, and weekend death-marches.
These heavily manual practices are repeated to ensure that the artisanal and fragile infrastructure does not fail. They are not designed or intended to support change; rather, they are coping mechanisms to slow change in response to painful past failures, in the hope that those failures are not repeated. In contrast, immutable infrastructure practices and the required automation are specifically designed and intended to manage change. Automation enables updates and innovations to be quickly created and tested, while the development and delivery pipeline manages these changes as they are systematically promoted into PRODUCTION.

This automated pipeline isn't just a few scripts to deploy a base system or services; rather, it is a fully integrated and orchestrated collection of tools and scripts fabricated to generate value (a.k.a. changes in PRODUCTION). Where possible, all infrastructure configurations are defined by parameterized code that can adapt to the specific workloads and/or applications running on them. This eliminates the need for experts to hand-craft a deployment. Furthermore, it makes your infrastructure fully auditable and your changes traceable. (Can you imagine a world where no one points fingers any more?)
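
As a small illustration of "parameterized code" in this sense, the sketch below renders a server specification from a handful of inputs. Every value, including the field names, is invented for the example.

```python
# Illustrative only: one parameterized definition produces every environment.
SIZES = {"small": {"cpu": 2, "ram_gb": 8}, "large": {"cpu": 8, "ram_gb": 64}}

def render_server_spec(app: str, workload: str, env: str) -> dict:
    """No hand-crafted servers: the same code path defines DEV, TEST, and PROD."""
    spec = dict(SIZES[workload])
    spec.update({
        "name": f"{app}-{env}",
        "image": "base-image-2016.08",                  # the known-good, versioned build
        "monitoring": True,
        "audit_tag": f"{app}/{env}/defined-in-code",    # traceability comes for free
    })
    return spec

print(render_server_spec("billing", "large", "prod"))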

Embracing this level of automation and transparency in an enterprise is very possible, since much of the needed expertise is already in house. What tends to be missing is a vision of what is possible, a design pattern for continuous delivery tooling, and, most importantly, a strong commitment to change. This commitment is often dwarfed by the existing stressors caused by the work and rework associated with maintaining and managing legacy, artisanal environments.

Transitioning to an immutable infrastructure model is a journey and should be viewed as such. It demands intention and discipline to build resilient systems and platforms. It rarely happens by accident and will not scale without proper commitment. The benefits of greater agility, speed, stability, and resiliency are clear. Are you ready to take on your infrastructure, or will immutability remain a myth in your organization?

To learn more about how EMC can help you in this transition, please contact us at devops@emc.com.

Hybrid DEV/TEST Clouds Fulfill the Promise of Agile Enterprise IT

Assuming that you are a large enterprise and not quite ready to leap into the deep end of the public cloud, how can you take advantage of the elasticity and low costs of the public cloud while still maintaining ownership and control of your applications through on-premises solutions? One possibility is a hybrid cloud. Hybrid clouds enable enterprises to exploit a public cloud for lower-risk development and testing activities and then transition applications on-premises for "Staging" and, ultimately, for running in "Production".

When we look at the public cloud, its strongest value proposition to the enterprise is its ability to manage uncertainty. Unlike on-premises, private clouds, the public cloud provides product teams fast access to resources with little to no long-term commitment. Rather than dealing with slow and lengthy procurement and hardening processes, the public cloud allows teams to rapidly provision and configure new environments that can be used to experiment and innovate. As new changes are defined, built, and tested, product teams can swiftly create net-new environments from known-good starting points to verify and validate whether an experiment is successful and worth pursuing. The faster and more reliably this process occurs, the more an enterprise can innovate or pivot in response to market demand. As confidence and certainty grow, the new application or change can be transitioned back to the on-premises private cloud for final testing and, ultimately, deployment into production.

The flexibility and scaling of the public cloud enable development teams to parallelize efforts. What this means is that you can support multiple DEV and TEST environments simultaneously for the same application and product teams. For example, you could be running a long-duration regression test in one environment and then create a second, matching environment for UI or exploratory testing. By parallelizing the tests, teams are able to accelerate throughput by addressing two quality gates simultaneously. As tests complete, the environments are decommissioned and the cloud subscriptions ended. In addition to the obvious cost savings of this approach, development teams are also freed from the common delays (waste) associated with waiting for a TEST environment to be freed up.
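
A minimal sketch of that parallelism, with the provisioning, test, and tear-down calls reduced to stand-ins:

```python
# Sketch: two quality gates run in parallel against separate short-lived environments.
from concurrent.futures import ThreadPoolExecutor
import time

def run_in_ephemeral_env(suite: str) -> str:
    env = f"env-for-{suite}"     # stand-in for provisioning a cloud environment
    time.sleep(1)                # stand-in for the actual test run
    # ...tear the environment down here so the cloud subscription ends with the test
    return f"{suite} finished in {env}"

with ThreadPoolExecutor() as pool:
    for result in pool.map(run_in_ephemeral_env, ["regression", "exploratory-ui"]):
        print(result)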

The second benefit of a hybrid architecture is the financial model. The public cloud is optimized for high-volume, short-duration transactional processes, whereas the private cloud is optimized for steady, long-running workloads[1]. If we look at the software development lifecycle (SDLC) holistically, we can see that it is little more than a transactional process managing change as that change is defined, built, tested, and ultimately released into production. The closer to the left (a.k.a. writing code) you look, the greater the number of transactions. Given this paradigm, the public cloud is a better fit for the DEV and TEST stages of your SDLC because it can burst to meet the needed capacity of your development teams and shrink (or disappear) as an application transitions on-premises into "Stage" and ultimately "Production". This metered usage model reduces cost, since you only pay for resources on an as-needed basis. In addition, the public cloud also minimizes manpower cost by eliminating the DEV/TEST footprint from your private cloud and by outsourcing the most labor-intensive processes, namely environment provisioning and configuration.

The final benefit of the public cloud for DEV and TEST is its global distribution. A common complaint of off-shore team members is the latency they experience when trying to develop and test against applications and workloads running half a world away. The distributed nature of the public cloud enables IT teams to create working environments that are geographically closer to development teams, thereby reducing latency. While this does add some complexity and requires additional coordination across teams to manage data, code, and configurations, the benefit of having all development and test resources working on high-performing environments typically outweighs that complexity.

Employing a hybrid model that exploits the public cloud for DEV/TEST is not without challenges. Teams without rich test harnesses, mocks, and stubs struggle with dependencies on other applications, data, or integration services that are running on-prem. Production data that is often needed for functional and performance testing may not be permitted to leave enterprise boundaries, preventing TEST from running off-prem. And managing environment configurations across hybrid architectures adds complexity to automation and orchestration solutions. While these are all difficult problems to solve, there are a number of tools and techniques that can be introduced into the SDLC and deployment process to alleviate these challenges.

But, these challenges are not limited to technology. For many enterprises, the biggest hurdle is overcoming the organizational inertia to change.  Long held beliefs and best practices designed to restrict and standardize (aka. “manage”) compute, storage, and network resources need to be dismantled and redesigned to support the fluidity of DEV/TEST activities; extensible and policy-driven tool chains need to be introduced to support and direct development practices; and, transparent, measured operating models need to be adopted to provide IT leaders and business leaders the needed visibility into application lifecycle and platform management.

The benefits of hybrid clouds are clear. The pathway to adoption has been blazed.  The iterative approach tested.  What is stopping you from taking the leap to greater agility?

Check out the following links to learn more about EMC’s pre-engineered hybrid solutions and public cloud offerings.

[1] For more details on the TCO calculation of public versus private clouds for steady workloads, check out @sakacc's blog: "http://virtualgeek.typepad.com/virtual_geek/2013/04/what-is-the-real-thing-stopping-cloud-in-the-enterprise-rant.html".

Can You Deploy Applications Quickly?

I have had a number of conversations recently with customers who have made investments in IT automation tools and platforms. Some are seeing small gains from the automation, but most are struggling to realize the true ROI. What I typically find is that these targeted investments are point solutions designed to drive efficiency into a specific step or task, like server provisioning or application deployment; as such, they are often disconnected from the underlying value chain, or delivery pipeline. This disconnect constrains these tools and platforms within the existing communication channels, delivery methodology, and operating models, thereby preventing any real gains in velocity, agility, and/or quality. Without looking holistically at the problem and focusing on end-to-end deployment expertise, the benefits of DevOps will remain elusive.


The lean management foundations of DevOps challenge enterprises to widen their lens and build solutions within the context of the end-to-end value chain rather than focusing on a specific task or activity. In practice, this means that you build a complete, end-to-end tool chain every release, even though some of the capabilities and workflows may be very rudimentary (even manual) in the early versions. For example, my teams regularly build a deployment pipeline to support a "Hello World" application in Sprint 0. This MVP pipeline both demonstrates and measures the team's ability to effectively deploy and configure software and its corresponding infrastructure. As new tools and automation are added to the pipeline, more robust versions, or iterations, emerge. This iterative cycle allows new value to be recognized by the team quickly and, more importantly, allows new feedback to be ingested.
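
The sketch below captures the spirit of that Sprint 0 pipeline: every stage exists from day one, even if some are still manual stubs that only notify someone. The stage names, contact address, and behavior are illustrative, not a prescription.

```python
# A walking-skeleton pipeline: all stages present, some automated, some stubbed.
def automated(name):
    def run():
        print(f"[auto]   {name} passed")
        return True
    return run

def manual_stub(name, owner):
    def run():
        print(f"[manual] {name}: notified {owner}, awaiting sign-off")
        return True   # assume sign-off so the skeleton can be exercised end to end
    return run

PIPELINE = [
    automated("build hello-world artifact"),
    automated("provision test environment"),
    automated("deploy and smoke test"),
    manual_stub("security review", "secops@example.com"),   # hypothetical contact
    automated("deploy to production"),
]

if all(stage() for stage in PIPELINE):
    print("Hello World is live: the end-to-end skeleton works")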

By focusing on deployment first, the team has the basic pipeline needed to create new artifacts and to monitor and manage those artifacts systematically as they move through the development lifecycle. The basic pipeline provides both the ability to instantiate known-good versions of applications and infrastructure and to layer new changes onto these known-good configurations for testing.  The ability to start from, and/or roll back to, a known-good state is critical for testing changes and/or debugging errors and issues with a new change.  By always starting with a known-good base and minimizing the amount of change you introduce at any one time, teams will reduce the amount of risk and effort needed to conduct root cause analysis and remediation in the event of an issue.  This will help accelerate throughput and improve quality.

Too often, changes are manually deployed to an environment as an update or upgrade. This practice inevitably corrupts your known-good waypoints and creates "snowflake" implementations as updates are layered on top of each other. By stacking changes, teams lose traceability and repeatability. Without traceability or repeatability, it is nearly impossible to define and recreate working configurations and/or known-good versions consistently. Configuration management databases (CMDBs) are supposed to protect against this risk, but these systems often require manual updates and, as such, regularly fall out of sync with reality. As a result, teams struggle to recover quickly from an outage or incident. Incident management often turns into an expensive game of find-the-needle-in-the-haystack.

Looking past the glitz of automation and the promises of speed and agility, DevOps is about the disciplined practice of continuously improving the delivery pipeline. DevOps provides a framework and approach for refactoring complicated processes and introducing highly collaborative, agile practices that can lead to cultural change in the enterprise. Much of this optimization and change is accomplished through automation and orchestration tools; however, these tools are both selected and implemented through a DevOps design pattern that is laser-focused on delivering value back to the enterprise.

We have all read about the promise of DevOps, namely more releases, faster deployment, higher quality, better recovery time, and satisfied employees. This all starts with your ability to deploy infrastructure and applications successfully and repeatedly to your consumers. Before getting all spun up about which tools and platforms are best, you might want to examine just how good you are at deploying change.

If you would like to learn more or need help jumpstarting or course correcting your DevOps Transformation, feel free to contact us at devops@emc.com.

Why Is Proving and Scaling DevOps So Hard?

My team and I regularly meet with customers who ask the same question over and over: "How do we become successful with DevOps-at-scale?" The question is normally followed with a laundry list of "here is what we are doing around…" infrastructure automation, container pilots, continuous integration, test automation, cloud platforms, etc. The summary typically concludes with comments about how difficult acceptance and adoption of these new tools and practices have been. So why is DevOps so difficult to scale?

First off, DevOps is extremely pervasive. Success with DevOps will change or influence nearly every aspect of IT and even many parts of the business. Even setting a simple goal, like "deploy into production every 30 days", touches software development, testing, operations, infrastructure management, databases, security, compliance, architecture, release management, project management, and more. Everyone "knows" that DevOps is big, but until you can map out the end-to-end process and rationalize the associated stakeholder and SME community, getting acceptance and adoption of DevOps and Continuous Delivery will be impossible. Success will require the input, expertise, and support of this community. Don't underestimate the complexity and size of your enterprise value stream; you will need this community's support to implement tools, set standards, and drive adoption at scale. Tackle it iteratively and use early wins to generate momentum.

For those just starting DevOps, many will attest to how difficult it is to "get moving" or to "keep moving". The second impediment to DevOps-at-scale is organizational inertia, the tendency of a body to resist any change to its current state of motion. Human systems naturally gravitate toward stasis, or steady state, because it takes less energy and requires less thinking; this is the inertia. DevOps confronts both the status quo and this natural tendency by constantly challenging how you budget, plan, manage, execute, and organize around work. It does this by regularly introducing new tools, practices, and processes designed to continuously improve and optimize the deployment value stream. This new "norm" is especially difficult for team members who are comfortable enough with the status quo, and even more difficult for those who feel threatened by change. This is commonly referred to as "changing culture". Be realistic about pace and be ready to act, knowing that more than 80% of your organization is skeptical of success and 10% of those skeptics will outwardly defy change. Use coaches, collaborative design sessions, internal champions, and executive support to encourage participation and ownership in the end state. This will help build momentum.

Lastly, have a vision. It is much easier to build momentum when there is a common, shared vision of where you are going. Too many customers substitute a tactical list of to-do items for a vision, which makes it extremely difficult to connect the parts, define shared expectations, and understand the value. All to-do items should be mapped to a targeted near-term objective; each objective should be mapped to one or more organizational processes; each process should be linked to annual IT goals that are in turn connected to enterprise strategy. Without this enterprise lens, it is difficult to map and measure impact on enterprise success or to prioritize tactical projects based on impact. If you cannot demonstrate the relevance of DevOps to the bottom line, it will be difficult if not impossible to get funding and support.
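One way to keep that chain honest is to record it explicitly. The tiny sketch below is purely illustrative (the item, objective, process, and goal names are invented): it walks each tactical to-do item up through objective and process to an annual IT goal, so anything without a line of sight to strategy gets flagged for review.

    # Illustrative traceability chain: to-do item -> objective -> process -> annual IT goal.
    objectives = {"automate database patching": "reduce release lead time"}
    processes = {"reduce release lead time": "release management"}
    it_goals = {"release management": "deliver business value to customers monthly"}

    def trace(todo_item):
        objective = objectives.get(todo_item)
        if objective is None or objective not in processes:
            return todo_item + ": no line of sight to strategy - review or drop it"
        process = processes[objective]
        return " -> ".join([todo_item, objective, process, it_goals[process]])

    print(trace("automate database patching"))
    print(trace("upgrade the team wiki theme"))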

Scaling DevOps, like all organizational change, is a difficult and often slow process.  By acknowledging the common challenges outlined above and creating a unified vision of success, you will be well on your way to unlocking the potential of DevOps in your Enterprise.

The post Why Is Proving and Scaling DevOps So Hard? appeared first on InFocus Blog | Dell EMC Services.

Can DevOps and Continuous Delivery Work with Commercial Off the Shelf Software? https://infocus.dellemc.com/bart_driscoll/can-devops-continuous-delivery-work-commercial-off-shelf-software/ Thu, 10 Mar 2016 13:00:05 +0000

The short answer is often "Yes". However, there are limitations and constraints on the level of automation allowed, the return on investing in automation, and the influence over the development lifecycle that DevOps practices can have when dealing with Commercial-Off-The-Shelf (COTS) applications. For example, COTS applications will typically have a predefined and supported interface for managing configuration or accessing data. These endpoints may not be accessible or compatible with your existing tool chain and/or automated change management practices. The decision to onboard an application into a Continuous Delivery (CD) tool chain and to employ DevOps pipelines and practices in the development and management of a COTS solution is ultimately up to the enterprise. While it is easy to lump all COTS solutions together, my experience suggests that these applications typically come in one of three flavors, namely closed, open, and platform, and that each flavor should be evaluated differently.

Closed COTS Applications

Closed COTS applications allow little to no customization of functionality and/or interfaces. They typically have predefined management consoles and a published list of commands or APIs that allow external applications to interact with them. A good example of a closed COTS application is Microsoft Exchange.

Key considerations:

  • Can configurations be scripted and integrated with existing automation and orchestration tool chains?
  • Can configuration scripts be managed and maintained in external version control systems?
  • How are updates, patches, service packs applied and managed? Can these practices be automated?
  • What developer tools (APIs, SDKs, etc.) are available to extend and/or interact with the application and/or data?
  • Are these tools compatible with existing tools and expertise in the enterprise?
  • Can these developer tools be integrated within the existing tool chain?
  • How frequently are changes made to the base product?

Assuming that these applications can be managed by external systems, and that the code developed to automate deployment, configuration, and/or integration can be managed through a version-controlled pipeline, closed COTS applications make good candidates for DevOps and CD. By applying CD principles, COTS binaries are versioned and stored in an artifact repository, install and configure scripts are managed through version control and IaaS workflows, and applications and integration endpoints are auto-tested through deployment pipelines. This is only possible when consumers of the COTS application collaborate with operators of the COTS platform to ensure that the solution is aligned with enterprise goals, objectives, and standards related to portfolio management.
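As a rough sketch of what such a stage can look like in practice (the CLI commands, script names, and paths below are hypothetical placeholders, not any vendor's actual tooling), a pipeline step for a closed COTS application might pull a versioned binary from the artifact repository, run the vendor's unattended installer, apply a configuration script kept in version control, and smoke-test the published endpoints before promotion.

    import subprocess

    def deploy_closed_cots(package_version, config_ref):
        """Illustrative pipeline stage for a closed COTS application."""
        # 1. Pull the vendor binary from the artifact repository (hypothetical CLI).
        subprocess.run(["fetch-artifact", "cots-app", package_version], check=True)
        # 2. Run the vendor's documented silent/unattended installer.
        subprocess.run(["./cots-installer", "--silent", "--version", package_version], check=True)
        # 3. Apply the configuration script kept in version control at config_ref.
        subprocess.run(["git", "checkout", config_ref, "--", "config/cots-settings"], check=True)
        subprocess.run(["./apply-config", "config/cots-settings"], check=True)
        # 4. Auto-test the published endpoints before promoting to the next stage.
        subprocess.run(["./smoke-test-endpoints"], check=True)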

Open COTS Applications

Open COTS applications allow for significant modifications to functionality, data, and/or interfaces. These platforms typically have rich SDKs, APIs, and embedded developer utilities that enable users and developers to modify all layers of the application, namely presentation, business logic, and data. Open COTS applications typically have a large, complex footprint consisting of core services, data and/or GUI customizations, embedded applications, etc. Classic examples of open COTS solutions include ERP/CRM platforms (like SAP or Oracle) and portal platforms like SharePoint, ServiceNow, or WebSphere.

Key considerations:

  • All considerations from closed COTS solution above
  • How are customizations created through internal design editors, logic builders, or data schema extensions version controlled? (Are these configurations stored in a database or in code?)
  • How are customizations created through internal tools packaged, installed, and configured in higher-level environments like Stage or Production?
  • What does the development lifecycle for customizations created with internal tools look like? What quality gates are required? Architectural standards?
  • How are custom applications/applets that are embedded in the application packaged, installed, and configured?
  • Can these custom applications/applets be run and tested separate from the underlying COTS application?
  • What does the development lifecycle for customizations created with external tools look like? What quality gates are required? Architectural standards?
  • Does the COTS application provide testing frameworks, mocks, and/or stubs for testing external dependencies?
  • What automated test suites are supported by the platform? (unit, functional, regression, capacity, etc.)
  • What is the upgrade path for customizations? (historically)

As you can imagine, open COTS applications add layers of complexity. In fact, when considering DevOps and CD with open COTS solutions, I suggest building multiple pipelines, one per layer of the application, in the DEV or Commit stage of the SDLC. Subsequent phases, like Test, Stage, and Prod, then use a converged pipeline built from the known-good artifacts of the DEV/Commit stage. Before building your tool chain, study the development lifecycle and practices of the target COTS application; understanding how the end-state platform is created and how change is introduced will strongly influence your tool chain and pipeline design. Below is an example of this model.

[Figure: per-layer Commit-stage pipelines converging into a single Test/Stage/Prod pipeline for an open COTS application]
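A minimal sketch of that flow follows; the layer names, build callables, and stage checks are placeholders standing in for real build jobs and test suites, so treat it as an illustration of the fan-out/converge shape rather than a working pipeline definition.

    def run_commit_stage(layer_builds):
        """Run an independent Commit-stage pipeline per application layer."""
        artifacts = {}
        for layer, build in layer_builds.items():      # e.g. 'schema', 'ui', 'embedded apps'
            if not build():                            # build() stands in for build + unit tests
                raise RuntimeError(layer + " pipeline failed; nothing is promoted")
            artifacts[layer] = layer + "-artifact"
        return artifacts

    def run_converged_pipeline(artifacts):
        """Assemble the known-good layer artifacts and push them through Test, Stage, Prod."""
        for stage in ("Test", "Stage", "Prod"):
            print("deploying", sorted(artifacts), "to", stage, "and running", stage, "checks")

    artifacts = run_commit_stage({"schema": lambda: True, "ui": lambda: True})
    run_converged_pipeline(artifacts)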

Platform COTS Applications

The third type of COTS application is the platform. These applications provide a set of services and tools, as well as proprietary runtimes, that enable users to build and run custom applications on the platform. They are the most difficult type of application to integrate into existing DevOps practices and CD tool chains because common practices like version control, testing, and builds are done through the platform rather than with external tools. Sample platform COTS applications include Pegasystems, IBM Case Manager, etc. Platform COTS are packaged applications that enable customers to build applications on top of the base platform.

As a blend of closed and open, all of the considerations above are valid. In most cases, the base platform acts as a closed system, whereas the custom-developed applications on top act like open systems. For this type of application it is critical to understand what hooks and services are available to support DevOps practices and CD. Many of these platforms are not going to fit within your existing tooling and may not be able to employ the workflows, dashboards, and reporting you have grown accustomed to from your current tool chain.

After analyzing the different types of COTS solutions, I still maintain that COTS applications would benefit from the rigor and repeatability of DevOps and CD. And in many cases, CD tooling can be extended to support these platforms, assuming there are no significant limitations introduced by the COTS application itself. That said, there are many considerations that should be addressed before starting a DevOps project on these applications.

The post Can DevOps and Continuous Delivery Work with Commercial Off the Shelf Software? appeared first on InFocus Blog | Dell EMC Services.

Feeling Burned: 9 Ways to Reduce IT Department Firefighting https://infocus.dellemc.com/bart_driscoll/feeling-burned-a-few-strategies-for-2016-to-reduce-firefighting/ Tue, 19 Jan 2016 14:20:59 +0000

I was at a DevOps Meetup recently where the topic of firefighting versus project work was discussed. Not surprisingly, everyone at my table was struggling with this. At worst, firefighting was the job. At best, firefighting regularly interrupted your "real" job. I wish I could say this conversation was unique to this DevOps Meetup, but unfortunately it isn't. Customer after customer shares the same story: constant pressure to reduce costs and improve performance while being swamped with urgent, often unplanned work that leaves little to no time for the strategic, important projects that would help achieve flat-line growth objectives.

So How Do You Contain The Fire?

Below is a short list of strategies I have used in my career to help contain the fire. They are in no particular order. Hopefully, you can adopt one or more as a New Year's resolution to improve your work situation.

Triage Work

I once had a boss who went into a defect system and deleted all the SEV4 bugs. At first, I thought he was crazy; then I realized he was brilliant. The team was NEVER going to get to the SEV4 bugs. Many were over 200 days old. We wasted time, energy, and focus acknowledging them.

Triaging is an important skill and strategy both for managing large lists of work and for setting stakeholder expectations. I recommend creating three categories, namely work that will get done, work that might get done (best case), and work that won't get done.
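A hedged sketch of that triage rule is below; the severity and age thresholds are made up and would need to reflect your own service levels.

    from datetime import date

    def triage(items, today):
        """Sort work into will-do / might-do / won't-do buckets (thresholds are illustrative)."""
        buckets = {"will get done": [], "might get done": [], "won't get done": []}
        for item in items:
            age_days = (today - item["opened"]).days
            if item["severity"] <= 2:
                buckets["will get done"].append(item["id"])
            elif item["severity"] == 3 and age_days < 90:
                buckets["might get done"].append(item["id"])
            else:
                buckets["won't get done"].append(item["id"])   # e.g. 200-day-old SEV4s
        return buckets

    print(triage([
        {"id": "BUG-17", "severity": 1, "opened": date(2015, 12, 1)},
        {"id": "BUG-42", "severity": 4, "opened": date(2015, 6, 1)},
    ], today=date(2016, 1, 19)))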

Grouping

Grouping, or bundling, is a strategy of collecting similar items together and fixing them all at the same time. Basically, you gain efficiency by minimizing task switching and enabling team members to focus on a single problem area. This practice requires a published and searchable list of work that is shared.
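For example, a simple grouping pass over that shared work list (the field names and ticket IDs here are invented) lets a team batch everything touching the same component into a single focused session.

    from collections import defaultdict

    def group_by_component(work_items):
        """Bundle similar items so they can be fixed together, minimizing task switching."""
        groups = defaultdict(list)
        for item in work_items:
            groups[item["component"]].append(item["id"])
        return dict(groups)

    backlog = [
        {"id": "TKT-101", "component": "billing"},
        {"id": "TKT-107", "component": "login"},
        {"id": "TKT-113", "component": "billing"},
    ]
    print(group_by_component(backlog))   # {'billing': ['TKT-101', 'TKT-113'], 'login': ['TKT-107']}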

Limiting Work-In-Progress (WIP)

This is one of the most effective and underutilized strategies for containing fires and suppressing new ones. In its simplest form, it means don't start something new until you have completed what you are working on. There are very few requests or issues that can't wait until you either hit a logical stopping point or complete the task. Minimizing task switching is a proven method of improving both productivity and quality. When I start feeling stressed about not making enough progress or overwhelmed by the amount of work to do, I employ this method. It is amazing how a little focus can transform your work output.

This strategy scales very easily. As more people are added, you can adjust your WIP limits. Never set a WIP limit bigger than the size of the team; I recommend 70% of team size (rounded up). The challenge with this approach is discipline. You have to respect and follow your WIP limits even when requests are piling up.
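A minimal sketch of that rule of thumb, assuming the 70%-of-team-size heuristic above:

    import math

    def wip_limit(team_size):
        """70% of team size, rounded up, and never larger than the team itself."""
        return min(team_size, math.ceil(0.7 * team_size))

    def can_pull_new_work(in_progress, team_size):
        """Only start something new when the team is under its WIP limit."""
        return in_progress < wip_limit(team_size)

    print(wip_limit(6))                  # -> 5
    print(can_pull_new_work(5, 6))       # -> False: finish something first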

Fix Plus

Fix plus is a firefighting prevention strategy designed to reduce technical debt that is causing fires to reoccur or start. This practice involves analyzing and refactoring upstream and downstream systems and code when resolving an issue. Basically, it means whenever you fix something, analyze the related code and configurations just before and just after the area you are fixing and improve those as part of your fix. This is a proven practice for refactoring fragile legacy systems and for building automated test suites without creating a large project to do it. Besides, it is very rare for a company to fund a refactoring project because it is not perceived to add value.

Staff Augmentation

Bring in partners to temporarily offload firefighting activities so that you can focus on important, strategic projects like building a continuous delivery pipeline and tool chain or developing a self-service portal. Focus on efforts that will immediately improve the quality, resiliency, and performance of your IT organization, enterprise systems, and application portfolio. The goal is to offset the cost of augmenting the staff with the outcomes and expected return on investment from the strategic project. If you are starting a new project, add a line item for this extra support to free up SMEs to work on the strategic project. For example, when I was leading a large application rewrite for a financial services firm, the client recognized in advance that the demands on their actuarial team to meet business objectives and to support the development team exceeded that team's capacity. To prevent multiple failures and delays, the client added temporary headcount to offset the hours our Hedging SME spent supporting the development project. This isn't a no-cost option, but it can be a lifesaver for teams that are really struggling with firefighting activities and heavy workloads.

Visibility

If it isn’t in a report, on a dashboard, or managed by a shared tracking system, it didn’t happen. One of the biggest issues with firefighting is that the work is largely invisible to the organization. What the organization sees is high costs coupled with suboptimal performance instead of all the work you are doing to “keep the lights on”. Improving visibility, IT shops can illustrate how and where cost occurs and use that data to justify future investments and as importantly use that data to help triage requests in alignment with corporate goals. Without shared visibility, it is impossible to make good decisions.

No Heroes

If you have read The Phoenix Project, this refers to the resident rock star, Brent. Brent spent nearly every day solving everyone else's problems or making all the technical decisions because he was that good. As a result, he never got any of his own work completed on time and was a major bottleneck for the whole department. Stop recognizing and rewarding Brent for hoarding knowledge. Instead, reward Brent for sharing knowledge and elevating others.

Define Boundaries

Defining boundaries isn’t building a wall to protect rather it is about defining the rules around how you operate and interact with others.  This is best implemented at the team or department level and should include details around how new work items are added to queue, how queue is prioritize, and what truly defines a ‘fire’.  This does require leadership support particularly during the early stages as the organization adjusts to the new rules. Remember, this isn’t about stopping work from coming in rather it is about making sure that you are working on the most important and most valuable items first. I commonly hear that unplanned work is the biggest enemy.  If you combine this strategy with Visibility, Triage, and WIP Limits nearly all unplanned work other than outages are eliminated.

Tomato Timer

This is an individual or team strategy for blocking off quiet time to work. Use a physical timer, set to 30, 60, or 90 minutes, and set it somewhere visible. Then close the door, put up a "Do Not Disturb" sign, put on headphones, etc. This is quiet, focused work time. During this window you do not check email, text messages, IM, or the phone, and you do not engage anyone who comes looking for you until the timer finishes. You work items to completion and get stuff done. When the timer finishes, open the door, take down the sign, etc. You are in an open window and can interact freely. Set up multiple blocks like this daily. A Scrum team taught me this one. They were in a shared open space and were finding it difficult to concentrate and solve complex problems. The team implemented this strategy very effectively so that everyone could have quiet time.

I am sure there are many other strategies for time management and reducing firefighting. I hope some of these are valuable to you. One thing I know for certain is that you can't even think about transforming your business using DevOps until you solve the firefighting problems you are facing in your current IT environment. If you would like to learn more about DevOps, click here.

Good luck and put down that hose!

The post Feeling Burned: 9 Ways to Reduce IT Department Firefighting appeared first on InFocus Blog | Dell EMC Services.

DevOps – Not Just for Unicorns https://infocus.dellemc.com/bart_driscoll/devops-not-just-for-unicorns/ Wed, 04 Nov 2015 13:00:18 +0000

Last week, I had the pleasure of attending the DevOps Enterprise Summit (#DOES15) in San Francisco, where I was able to spend three days surrounded by people thinking about, talking about, and most importantly figuring out DevOps in the enterprise. What was abundantly clear is that DevOps is not just for the unicorns, a.k.a. web companies, as evidenced by keynotes from Target, HP, GE, and CapitalOne. In fact, what resonated with me and other attendees is that the principles and practices of DevOps can be employed in any environment with any technology stack to improve quality, accelerate deployments, and shift organizational culture. And yes, that even includes the mainframe.

Below is a short list of key takeaways from the conference that will shape how we work with our customers, providing you with a more valuable IT transformation experience.

Community

Building a cross-functional community is a critical investment needed to radiate DevOps. In nearly all the success stories shared at the conference, it was clear that the strength, structure, and accessibility of the community were key indicators of success and adoption. Strength came from both coaches and SMEs who defined the DevOps guardrails and partnered with development teams to learn and practice DevOps. These coaches and SMEs also sponsored events like hack-a-thons or internal DevOps days to bring people together and bridge the common gaps in understanding and perspective that prevail in the enterprise.

A second common community trend was making space for teams to learn and practice DevOps. This space consisted of physical office space and tools configured to support team collaboration, as well as time to learn and experiment with new tools, techniques, and practices. A number of speakers from HP, Target, IBM, and others described DevOps Dojos and Centers of Excellence, which are physical spaces, supported by coaches and SMEs, where whole teams can come to learn and practice DevOps against an active development project.

Automate, then test, test, test

DevOps is as much about going fast as it is about building in high quality. Test automation integrated into the delivery pipeline was a key theme in this year's DevOps Summit. Interestingly enough, it was not automation for the sake of reducing cost or lightening the testing burden. Rather, it was focused on shortening feedback loops and proactively identifying and resolving errors before they were promoted into Production. While this does reduce the cost associated with remediating a defect and/or outages caused by a missed defect, I found it interesting that the key driver was performance and quality rather than cost savings.

A couple of other tidbits related to testing: include security and compliance testing in your tool chain. Waiting until late project stages to run penetration testing or PCI checks can surface complex defects that must be fixed prior to deployment, and these types of errors can seriously derail a project. Multiple practitioners recommended running these scans early and often in the development lifecycle. Better still, introduce code analysis tools and pretested code libraries and frameworks that prevent vulnerable code from entering the pipeline.
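A hedged sketch of "scan early and often" as a commit-stage gate follows; the check commands are placeholders for whatever unit test runner, static analysis, dependency audit, and compliance tools you actually use.

    import subprocess

    COMMIT_STAGE_CHECKS = [
        ["run-unit-tests"],           # fast functional feedback first
        ["run-static-analysis"],      # code quality and vulnerable-pattern checks
        ["run-dependency-audit"],     # known-CVE scan of third-party libraries
        ["run-pci-config-checks"],    # compliance rules, shifted left of Stage/Prod
    ]

    def commit_stage():
        """Fail the pipeline at the first broken check so feedback arrives in minutes, not weeks."""
        for check in COMMIT_STAGE_CHECKS:
            if subprocess.run(check).returncode != 0:
                raise SystemExit("commit stage failed at " + " ".join(check) + "; fix before promoting")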

Lastly, you can even create pipelines for mainframes. One example is IBM's Rational® Development and Test Environment for System z®, which enables development and test teams to emulate a z/OS hardware configuration on Linux. Now even mainframe code can be managed in a pipeline without the cost of MIPS and resources on an actual mainframe.

Scientific method

Employ the scientific method to test hypotheses and measure results until the best path is identified. Changing without measuring the outcome doesn't provide you with the information needed to learn and adapt. Recheck assumptions and make course corrections as needed throughout the DevOps transformation process. There is no single right way to do DevOps. It is a journey and will be unique for every enterprise and every portfolio. Deming had it right over half a century ago: Plan. Do. Check. Act.

ChatOps

ChatOps is the integration of collaboration tools with monitoring and management tools so that subscribers can get real-time updates, alerts, and notifications on environments and applications through a single pane. What I found so interesting about ChatOps was not the single-pane notion, but rather how ChatOps helped foster community. First, ChatOps provides the same information to team members from different departments in real time, so everyone knows there is a problem. Second, ChatOps provides a vehicle for the team to collaborate on resolving issues: less finger-pointing, more joint discovery and resolution. For more info on ChatOps specifically, check out the ChatOps for Dummies book.
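For a feel of the mechanics, here is a minimal sketch that pushes a monitoring alert into a shared team channel. The webhook URL and payload shape are purely illustrative assumptions, not any specific chat product's API; most chat platforms expose some incoming-webhook mechanism along these lines.

    import json
    import urllib.request

    def notify_channel(webhook_url, service, status, detail):
        """Post an alert where every subscribed team member sees it at the same time."""
        payload = {"text": "[" + status.upper() + "] " + service + ": " + detail}
        request = urllib.request.Request(
            webhook_url,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request)

    # notify_channel("https://chat.example.com/hooks/ops-room",
    #                "checkout-api", "degraded", "p99 latency above 2s for 5 minutes")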

While these are only a few of the nuggets I gleaned from the sessions, there was much more to learn from other participants and speakers. I am looking forward to DOES2016 – maybe in Austin.

The post DevOps – Not Just for Unicorns appeared first on InFocus Blog | Dell EMC Services.
