Accelerating your IT Transformation with a Software Defined Data Center
In my last post in this series on the IT Transformation Storymap, I talked about how IT can transition from being a systems integrator to becoming a service broker. This post will focus on the Software Defined Data Center (SDDC) and the role it plays in accelerating IT Transformation.
The hallmarks of IT Transformation, as depicted in the Storymap, are increased elasticity, lowered maintenance, greater efficiency, and more control. Many of these can be delivered directly by the Software Defined Data Center. When we think about it in terms of the hybrid cloud vision, running workloads wherever it makes the most business sense requires the ability to move those workloads between public, virtual private, and private clouds. This movement needs to happen in a way that eliminates the need to worry about compatibility and performance across different cloud models. That is where the SDDC comes in.
What the SDDC allows us to do is configure our infrastructure and operations in such a way that we can fully articulate what a workload is, in terms of a profile covering performance, availability, security, compliance, and other attributes. More importantly, it allows us to run that workload on any one of those cloud models, so long as the specific service provider can meet the commitments defined in those attributes.
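To make that idea concrete, here is a minimal Python sketch of a workload profile and the commitment check described above. All names and attributes here are hypothetical illustrations, not the vCloud or ViPR API: the point is simply that a workload is described declaratively, and any cloud whose capabilities satisfy every attribute is an eligible home for it.

```python
from dataclasses import dataclass

@dataclass
class WorkloadProfile:
    """Declarative description of what a workload needs, not where it runs."""
    min_iops: int                # minimum storage performance
    availability_pct: float      # e.g. 99.9
    encryption_required: bool
    compliance_tags: frozenset   # e.g. {"PCI-DSS"}

@dataclass
class CloudProvider:
    """Capabilities a public, virtual private, or private cloud commits to."""
    name: str
    max_iops: int
    availability_pct: float
    supports_encryption: bool
    compliance_certs: frozenset

def meets_commitments(profile, provider):
    """A provider is eligible only if it satisfies every attribute in the profile."""
    return (provider.max_iops >= profile.min_iops
            and provider.availability_pct >= profile.availability_pct
            and (provider.supports_encryption or not profile.encryption_required)
            and profile.compliance_tags <= provider.compliance_certs)

def eligible_clouds(profile, providers):
    """The workload can run on any cloud model that meets its commitments."""
    return [p.name for p in providers if meets_commitments(profile, p)]

profile = WorkloadProfile(min_iops=5000, availability_pct=99.9,
                          encryption_required=True,
                          compliance_tags=frozenset({"PCI-DSS"}))
providers = [
    CloudProvider("private-dc", 20000, 99.99, True, frozenset({"PCI-DSS", "SOC2"})),
    CloudProvider("public-a", 3000, 99.9, True, frozenset()),
]
print(eligible_clouds(profile, providers))  # → ['private-dc']
```

Note that placement falls out of the profile alone; nothing in the workload's description names a specific data center or vendor.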
So, in order to realize that vision, we need an infrastructure that can be defined by policy rather than by the manufacturer's vision at the time of implementation. This is the basis of what the vCloud Suite and frameworks like ViPR deliver. You start with vanilla infrastructure that is configured and deployed according to attributes defined by the software suite, in order to meet the requirements of the applications. This is a wholly new approach to data center management. In the past, once a system was stood up in the data center, its configuration was, by and large, considered fixed: the amount of RAM, CPU, and disk space available to it was often whatever was in the system when it went into the rack. Now, through virtualization, extended by software-defined frameworks like vCloud, we can configure the assets and security associated with a system to meet the requirements of a particular workload, blueprint, or pattern of compute.
The SDDC allows us to create, according to policy, blueprints or patterns of compute that form the basis for running workloads. The ultimate goal is to reduce the number of distinct patterns we have to deploy across the data center, and to replace manual updates and configuration changes with automation. The SDDC also reduces lock-in to a particular hardware frame by configuring vanilla infrastructure to be dynamic, so that if tomorrow a particular workload needs more compute, memory, or storage, we can deliver that without having to interfere with the workload itself. This gives us great flexibility in configuring and reconfiguring our infrastructure to meet the changing needs of our applications.
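The policy-driven reconfiguration described above can be sketched as a simple reconciliation step: compare what a blueprint's policy now demands against what is currently deployed, and apply only the delta. This is an illustrative sketch with made-up resource names, not any vendor's API, but it captures why the workload itself is never touched, since only the underlying allocation changes.

```python
def reconcile(current, desired):
    """Compute the resource changes needed to bring a running blueprint in
    line with its policy, without redeploying the workload itself."""
    return {key: desired[key] - current.get(key, 0)
            for key in desired
            if desired[key] != current.get(key, 0)}

# Hypothetical blueprint: the policy was updated to demand more resources.
policy = {"vcpus": 8, "ram_gb": 32, "storage_gb": 500}
running = {"vcpus": 4, "ram_gb": 32, "storage_gb": 250}

delta = reconcile(running, policy)
print(delta)  # → {'vcpus': 4, 'storage_gb': 250}
```

Automation then applies that delta (for example, hot-adding vCPUs and growing a virtual disk), which is what lets the same vanilla infrastructure serve changing workload needs without manual intervention.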
As I’ve mentioned in previous posts, business agility is now completely intertwined with IT agility. In the past, configuring infrastructure meant locking it in, which often left IT organizations, and by extension the business units they serve, stuck with particular configurations or architectures because of the capital sunk into standing them up initially. The SDDC breaks us free of some of those bonds.
In the next and final blog in the series on the IT Transformation Storymap, I will be talking about how to ultimately get these workloads into the ‘city on the hill.’