Harri Kallioniemi – InFocus Blog | Dell EMC Services

Hybrid Cloud Cookbook: DevOps is Not Just Platform 3 Hype

I often hear from customers that they are not Platform 3 companies building fancy consumer-facing mobile applications, and thus have no need for continuous delivery and the other promises of DevOps. They are absolutely right, and at the same time they are ignoring some of the best things that have hit the IT industry in a long time.

If I look back at my 20 years in this industry, I cannot find a single IT project that would not have substantially benefited from DevOps, even when I was doing PL/I programming on the mainframe. What confuses people is that we talk about continuous deployment as the goal—but continuous deployment is really only a side effect of running professional IT—and that is what DevOps is all about. In other words, going from individual-based artisan work to a team effort where basic processes and platforms are well defined and maintained.

There is a fundamental approach problem in traditional application development projects. We tend to believe that testing and deployment are phases that come after development work has been finalized, and that there is no need to worry about them before we get to that phase. This leads to two major issues. First, we run into more quality issues, and second, deploying new releases becomes much harder and takes longer than anticipated, which delays the project. The sequential nature of traditional application development, with its complex processes, leads to a highly stressful, compressed timeline to resolve issues as they are found, and this pressure only increases with each failed round of testing.

I have numerous real-life experiences of projects that were considered to be relatively on time and on budget at the end of the development phase, but turned into massive problem projects once we entered the testing phase. Most of these issues would have been avoided, or at least better mitigated, if we had followed the principles of agility (do small iterations and fail early) and DevOps (design the project with testing and deployment in mind).

So what would have been different? First, if we had defined test cases against every functional requirement as early as the design phase, we would have noticed that many of the requirements were vague and ambiguous, and that there was no common set of success criteria between users and developers. In the traditional model we discovered that only after the coding work was done, leading to costly and time-consuming changes. Properly defined test cases force everybody to agree on success criteria, and they flush out all poorly defined, ambiguous requirements.
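
To make this concrete, here is a minimal sketch of what an executable acceptance test might look like (pytest style). The requirement, function name, and the two-second threshold are invented for illustration, not taken from any real project:

```python
# Hypothetical example: the vague requirement "order search must be fast
# and return relevant results" becomes two executable tests that force
# users and developers to agree on concrete success criteria up front.
import time

def search_orders(customer_id, status="open"):
    # Stand-in implementation so the test is runnable; the real system
    # would query the order database.
    return [{"customer_id": customer_id, "status": status, "id": 1}]

def test_search_returns_only_matching_orders():
    results = search_orders("C-1001", status="open")
    assert results, "a customer with open orders must get results"
    assert all(r["customer_id"] == "C-1001" for r in results)
    assert all(r["status"] == "open" for r in results)

def test_search_completes_within_agreed_latency():
    start = time.perf_counter()
    search_orders("C-1001")
    elapsed = time.perf_counter() - start
    assert elapsed < 2.0, "agreed success criterion: results within 2 seconds"
```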

Secondly, we always underestimate both the effort needed to set up and maintain the different environments (development, two or three test environments, and production) and the effort to move (deploy) the applications to an environment. Why is that? Issues related to coding and functionality are relatively easy to uncover, but many issues are caused by the environment (configuration issues, bugs in underlying software like drivers and servers, etc.), and these tend to be the difficult ones to figure out. They usually start popping up only once you put more load on the application, which leads to symptoms appearing in different places. It's like the human body: your finger is numb, but the cause of the problem is a stiff neck. Once you finally find the root cause, you need to remember to apply the fix to all environments and test that they all still work afterwards. This simple task is surprisingly often omitted, both because many other things need attention and because different environments are often maintained by different groups (the development team handles development and part of the test environments, while the production team handles the rest).
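
A sketch of what such a drift check could look like, with invented environment names and settings; the point is that the comparison is written down once instead of living in someone's memory:

```python
# Minimal sketch (invented settings) of checking environments for
# configuration drift: compare each environment against a baseline for
# the settings that must be identical everywhere.
ENVIRONMENTS = {
    "dev":  {"java_version": "1.8.0_151", "db_driver": "9.4", "heap_gb": 4},
    "test": {"java_version": "1.8.0_151", "db_driver": "9.4", "heap_gb": 8},
    "prod": {"java_version": "1.8.0_144", "db_driver": "9.3", "heap_gb": 16},
}

# Sizing (heap_gb) may legitimately differ per environment; versions must not.
MUST_MATCH = {"java_version", "db_driver"}

def report_drift(envs, baseline="prod"):
    base = envs[baseline]
    for name, config in envs.items():
        if name == baseline:
            continue
        for key in MUST_MATCH:
            if config.get(key) != base.get(key):
                print(f"{name}: {key}={config.get(key)!r} "
                      f"differs from {baseline} ({base.get(key)!r})")

report_drift(ENVIRONMENTS)
```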

The last piece is the difficulty of deployment. You need to deploy your application code and configurations in exactly the right sequence, with the right steps performed for each task; otherwise there is a good chance of the application not working. Many times this resembles voodoo more than 21st-century high tech, as a single step done in the wrong order can lead to mysterious issues that nobody can explain. Naturally, most of these quirks are undocumented, and the correct sequence is found by trial and error. The larger your team is and the more components the application has, the more complex the deployment gets. This all leads to human error as people try to do things under enormous time pressure.

Once you are in the depths of this negative cycle, it is very difficult to pull the emergency brake and call for a month-long break to do the work that should have been done in the design phase. You need to plan how you maintain the environments and have the change processes in place. You also need to design your application architecture not only from a run-time perspective, but also from a deployment perspective. This raises the question: how do you break the environment into smaller pieces that are less complex to deploy? The answer is that you must define the roles and responsibilities in the deployment process and automate as much as possible in order to avoid human error.
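
For illustration, a minimal sketch of encoding a deployment sequence as code, so the correct order is reviewed, versioned, and repeatable instead of being tribal knowledge; the step scripts are hypothetical placeholders:

```python
# Minimal sketch: the "voodoo" order of deployment steps lives in one
# reviewed, repeatable place. Script names are illustrative only.
import subprocess

DEPLOY_STEPS = [
    ["./stop_app_server.sh"],
    ["./backup_database.sh"],
    ["./apply_schema_changes.sh"],
    ["./copy_application_artifacts.sh"],
    ["./update_configuration.sh"],
    ["./start_app_server.sh"],
    ["./run_smoke_tests.sh"],
]

def deploy():
    for step in DEPLOY_STEPS:
        print("running:", " ".join(step))
        result = subprocess.run(step)
        if result.returncode != 0:
            # Stop immediately: continuing out of sequence is exactly
            # what causes the mysterious issues described above.
            raise SystemExit(f"step failed: {' '.join(step)}")

if __name__ == "__main__":
    deploy()
```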

That is what DevOps is all about—teamwork, constantly improving your processes, finding bottlenecks, improving your team's throughput, and designing for success. Think of continuous delivery as the byproduct of a well-implemented DevOps environment in your organization. Believe me, during the final days before any development project is scheduled to go live, any organization would do almost anything to have the ability to deploy successfully 10 times per day.

 

Hybrid Cloud Cookbook: Does Workload Placement Matter?

Workload Placement 

One size does not fit all. There will not be one silver bullet that magically works for all your needs. The same goes for cloud and infrastructure choices. The number of options is expanding, to meet the unique business needs of all types of organizations. We see increasing numbers of hybrid cloud solutions as well as various configurations and options within private and public cloud environments. With this choice comes risk; the wrong choice of workload placement can degrade performance and scalability and ultimately make applications unusable.

When considering workload placement, you have to weigh a wide variety of factors, ranging from technical requirements to legal aspects, as seen below (a simple scoring sketch follows the list):

  • Performance requirements (like I/O profile, latency and jitter requirements)
  • Resiliency requirements
  • Needed underlying services (data, network and security services)
  • Data privacy considerations
  • Software license terms
  • Security
  • Infrastructure management requirements
  • Integration with existing systems
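
To illustrate how these factors can feed a placement decision, here is a minimal weighted-scoring sketch; the factors, weights, and scores are invented, and a real assessment would be far more nuanced:

```python
# Minimal sketch: score placement options against weighted factors.
# All weights and scores below are invented examples.
WEIGHTS = {
    "performance": 3, "resiliency": 3, "data_privacy": 5,
    "license_terms": 2, "security": 4, "integration": 2,
}

# 0-5 per factor: how well each placement option satisfies it.
OPTIONS = {
    "private_cloud": {"performance": 5, "resiliency": 4, "data_privacy": 5,
                      "license_terms": 5, "security": 4, "integration": 5},
    "public_cloud":  {"performance": 3, "resiliency": 4, "data_privacy": 2,
                      "license_terms": 2, "security": 3, "integration": 2},
}

def score(option):
    return sum(WEIGHTS[f] * OPTIONS[option][f] for f in WEIGHTS)

for name in OPTIONS:
    print(name, score(name))
```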

Getting Started

Larger companies easily have hundreds, if not thousands, of applications. Yet most companies do not have an up-to-date catalog of those applications, the dependencies between them, and the infrastructure each one uses. This is a challenging starting point for any workload placement decision and leads to a trial-and-error approach. Knowing this, the foundational first step is to get a decent view of the application landscape.

While collecting this information, the key is to collect all the data that is needed throughout the workload placement decision-making process, not just in the first step. You need to understand what data is relevant, and avoid overdesigning and collecting unnecessary data, as that just creates complexity.

Excel is an easy way to start collecting data, but it will quickly become a bottleneck. A proper tool will help collect and store the needed data, and it provides the power to analyze and view the data from multiple angles.
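
As an illustration of "proper tooling", even a minimal structured record beats free-form spreadsheet columns; the fields below are invented examples of data you might carry through the whole decision process:

```python
# Minimal sketch of a structured application inventory record, the kind
# of data Excel quickly fails to manage. Fields and values are invented.
from dataclasses import dataclass, field

@dataclass
class Application:
    name: str
    owner: str
    criticality: str                 # e.g. "high", "medium", "low"
    infrastructure: str              # e.g. "vmware", "physical", "aws"
    depends_on: list = field(default_factory=list)  # upstream app names
    data_privacy: str = "internal"   # e.g. "public", "internal", "regulated"

inventory = [
    Application("order-portal", "sales-it", "high", "vmware",
                depends_on=["order-db", "auth-service"]),
    Application("order-db", "dba-team", "high", "physical",
                data_privacy="regulated"),
]

# One of many "angles" to analyze: which apps have regulated dependencies?
regulated = {a.name for a in inventory if a.data_privacy == "regulated"}
for app in inventory:
    if any(dep in regulated for dep in app.depends_on):
        print(app.name, "depends on regulated data - placement constrained")
```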

Cloud Readiness

Putting legal and other non-technical issues aside, the biggest question in placing a workload into a cloud environment is determining how cloud-native your applications are. People usually refer to The 12 Factor App methodology and microservices when defining apps as cloud-native. Keep in mind that applications don't need to be 100% cloud-native (for example, horizontally scaled out) when considering a move to an IaaS cloud. The most critical aspects of being cloud-ready include assessing the following:

  • Ability to run on standard, virtualized infrastructure
  • Resiliency independent of the underlying infrastructure
  • The number of proprietary or hard-coded dependencies on underlying services

For the past 30 years, we have built applications based on the "always-on infrastructure" principle. We expected infrastructure to provide stable and predictable services, so there was no need to code resiliency inside the application. And this goes not just for the infrastructure, but for all services that the application consumes. In the cloud, you need to cope with situations where latencies can change from milliseconds to seconds, and even with the occasional unavailability of a service.

You can also expect your applications to have a number of dependencies, requirements and hard-coded configurations related to underlying infrastructure and services—it could be as simple as a certain version of an application server. Creating applications that are agnostic to the underlying infrastructure and services takes considerably more time and effort. With the budget and time constraints everybody has to live with, and frankly because we often get lazy, we take shortcuts and use proprietary or hard-coded services, which makes applications harder to move.

Achieving Price Benefit

Assuming your applications are cloud-ready, the next element to consider is cost optimization. Public cloud pricing models are different from what you have gotten used to with internal IT or with outsourcers. We have spent years, if not decades, optimizing our workloads to squeeze the best results out of the existing models. Given the complexity, variety and velocity of cloud pricing models, there will be a learning curve requiring new procurement skills from your organization. Getting the cost benefit may require you to optimize the applications for cloud pricing models; the bare minimum is to set policies and enforce timely decommissioning of services in order to avoid invoice surprises.

Does Workload Placement Matter?

At the end of the day, workload placement can help drive down operating costs, create healthy competition, and serve as an alternative to the current infrastructure. A workload analysis also yields improved situational awareness (an application catalog with dependencies), and the improvements made to application architecture and deployment processes (DevOps) to get applications cloud-ready will increase your agility and reduce incident resolution times and risk.

Hybrid Cloud Cookbook: New Role and Skill Requirements

This blog is part of the Hybrid Cloud cookbook series which addresses key concepts of building and running a hybrid cloud environment and organizational transformation.

Numerous consulting companies and universities have focused on addressing the issue of organizational transformation. This blog is centered on the need to organize teams around outcomes, not around functions, in a Service Oriented environment.

Introducing New Roles

Well-defined new roles and functions are critical for successful transformation.

  • Human Change Management and overall awareness assist employees in understanding what is expected from them and how their role will change in the new organization.
  • Clear communication and awareness reduces employee anxiety towards transformation, keeping them focused and aligned with the shifting organization.
  • When the new roles have been defined, organizations should seek to map existing roles to new roles and functions. Employees should not feel that their job is going away, but they are being repurposed to a new role to support corporate objectives.
  • Employees performing their new roles do not initially need to move in the organization; they can remain matrixed. The key is for them to begin playing their new role and interacting in that manner with the larger organization.

Below are a few examples of new technical roles in a Hybrid Cloud organization:

  • Blueprint Developer – creates automation blueprints and manages the different versions and life cycles of blueprints. Also ensures that IT policies are implemented and enforced for the published services.
  • Cloud Engineer – combines aggregated services using internal and external technical services and ensures proper integration with the needed internal systems.
  • Cloud Analyst – follows and reports consumption and works across the technical components and clouds. A key role which provides facts for decision making and enables quick reaction to demand changes.
  • Integration Manager – ensures that different clouds and services are integrated into key delivery processes (change, incident management, reporting, etc.).

Infrastructure resources that are left supporting the legacy environment will need to be cross-skilled, with deeper expertise in one or two areas for the Hybrid Cloud environment. As an example, EMC Presales has run a "major-minor" program where everybody is required to have knowledge in two areas beyond their main profession. The minor topics are not geared toward learning new EMC technologies, but toward growing critical areas outside an employee's core skills, such as application skills.

A New Way to Run

Once you have defined the new technical roles, the next step is a bit more fundamental—start running your IT as a business. In a nutshell, it means dividing the IT organization into roles where supply and demand for services are actively managed.

Traditional IT has been organized around roles such as Enterprise Architects, Project Managers, Administrators, and Network, Storage and Compute Engineers. These skillsets are still required in a hybrid cloud model, but they will be performing new functions. In creating a Service Oriented model, organizations need new roles and functions such as Service Design, Relationship Management, and Demand Management—all the functions required to run IT like a business.

A bit farfetched? No, EMC IT is doing it. The idea is simply to move from a product- and technology-centric model to a service- and customer-oriented model. IT organizations will now have a group of IT resources looking at user demand, optimizing service portfolios, promoting available services, and reporting consumption and costs back with full transparency.


Here are example functions in your new IT business management office:

  • Service Management Office
  • Relationship Management
  • Service Portfolio Management
  • Services Sales and Marketing
  • Governance, Risk and Compliance
  • IT Finance and Cost Optimization
  • Business and IT Alignment Office

Shadow IT Drives the Change

The main driver for the customer-centric model is to embrace choice instead of fighting against it. The world where all IT was provided by a single entity (either in-house or external) is no longer a viable option. IT needs to start understanding demand and requirements and link them to the best capability to meet business objectives. Thus, your organization becomes a service broker.

With this, you will not need resources to perform a 30-step manual provisioning process, but rather people who can analyze application workload requirements and assign them to the right cloud environment with the right policies. Every infrastructure resource needs to upskill themselves with application skills – for example, how an Oracle DB works and what infrastructure it requires.

Final thought – It’s a process

This new way of running IT makes big demands of the organization. Not only do IT resources need to be literate in multiple technology areas, they also need to master influencing and communication skills and adapt to continuous change.

To achieve that, Gartner claims bimodal IT requires segregating resources between the legacy and greenfield environments. EMC's experience has shown that having a small team innovating and driving the new is the appropriate starting place. However, eventually most legacy resources will be repurposed to the greenfield side.

Regardless of the route you take, it will be a process. Unlike artificial intelligence, humans are not good at learning from other people's experiences, so each individual needs to go through the learning curve themselves. Trying to address all of the people at the same time and in the same way will lead to mediocrity; an organization cannot absorb that much change at once. Knowing this, I suggest getting your first 30% to lead the way. As an example, we are running a "Vanguard" program to achieve these initial successes. The members of the program do not belong to any elite group, but are tasked with being change agents, driving the change among their peers.

Hybrid Cloud Cookbook: IT Service Catalog Design

This blog is part of the Hybrid Cloud "cookbook" series that touches on the key concepts of building and running a true hybrid cloud and IT-as-a-Service (ITaaS).

In this blog I'm touching on an important concept of true cloud: self-service and the service catalog. These concepts are the above-the-line part of ITaaS, the part that is visible to users. The below-the-line part is automation, which I have covered in a separate blog. Note that services can be consumed both through a portal and through an application programming interface (API).

The service catalog is much more than a page in a self-service portal. It is where you set and implement your IT policies in the self-service world. When you define your service catalog, you need to define all of the elements needed to deliver ITaaS, in addition to any policies and contracts (as depicted below; a minimal data-model sketch follows the list).

[Figure: Service description – characteristics, policies and contract]

  • Characteristics: what you get as part of the service (both fixed parts and user-selectable parts)
  • Policies: the optional and mandatory policies you associate with the service (like data protection, compliance or approval thresholds)
  • Contract: the cost of the service, the SLA, usage reporting, etc., as well as how and when the service is terminated
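
To make the service definition concrete, here is a minimal data-model sketch of a catalog item with these three element groups; all names, fields, and values are invented for illustration:

```python
# Minimal sketch of a catalog item: characteristics, policies, contract.
from dataclasses import dataclass, field

@dataclass
class Contract:
    monthly_cost: float
    sla_uptime_pct: float
    termination_notice_days: int

@dataclass
class CatalogService:
    name: str
    characteristics: dict   # fixed parts and user-selectable parts
    policies: list          # mandatory/optional policies to enforce
    contract: Contract
    elements: list = field(default_factory=list)  # lower-level services

windows_server = CatalogService(
    name="Provision Windows server",
    characteristics={"os": "Windows Server 2008", "cpu": [2, 4],
                     "ram_gb": [8, 16]},
    policies=["nightly-backup", "approval-over-500-eur"],
    contract=Contract(monthly_cost=120.0, sla_uptime_pct=99.5,
                      termination_notice_days=30),
)
```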

Many Levels of Services

A typical service in a service catalog would be "provision one Windows 2008 server." A service can also be directed at an end-user audience, such as "onboard a new employee" or "deploy development environment." In turn, such end-user-facing services are often composed of lower-level services, sometimes described as service elements. You can think of the services in a catalog as a hierarchy, where multiple low-level services are used to accomplish a more complex service (as seen below).

[Figure: "Deploy development environment" as a hierarchy of lower-level service elements]

Approach for Building a Service Catalog

Building a comprehensive service catalog is not a trivial project, so you also need to think about how to approach it. You have two fundamental choices: top down or bottom up. Top down means that you publish a service item to the catalog without necessarily having the entire technical solution in place to deliver the service in an automated fashion. The bottom-up approach means that you build the low-level, fully automated technical services first, and only then start to build the more complex, higher-level services.

Technical IT staff usually prefer the bottom-up approach (everything needs to be 100% ready before you release something), but I personally prefer the principle of agility: only build for need. You can go ahead and publish your top 20 or 50 services and implement only a bare-minimum technical solution behind them (for example, an email to the appropriate person, who then fulfills the request the old way). Only when you learn which services are actually needed do you start to build the low-level services below them, with the associated automation. The benefit of this approach is that it starts teaching the organization to use the self-service catalog and to think about IT as a service. Expectations must still be managed around execution time, as it does not improve much while provisioning is still performed the old, manual way.
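
A minimal sketch of this top-down fallback pattern, with hypothetical service names: the catalog accepts every published request, and anything without automation behind it is routed to a human:

```python
# Minimal sketch: publish the service first, fall back to manual
# fulfillment where automation does not yet exist. Names are invented.
AUTOMATED_HANDLERS = {
    "provision_vm": lambda req: print("calling provisioning API for", req),
    # "deploy_dev_environment" is published but not yet automated.
}

def open_manual_ticket(service, request):
    # Stand-in for emailing/ticketing the team that fulfills it the old way.
    print(f"ticket created: please fulfill '{service}' manually: {request}")

def handle_catalog_request(service, request):
    handler = AUTOMATED_HANDLERS.get(service)
    if handler:
        handler(request)
    else:
        open_manual_ticket(service, request)  # same catalog, manual delivery

handle_catalog_request("provision_vm", {"cpu": 2})
handle_catalog_request("deploy_dev_environment", {"team": "web"})
```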

In real life, you seldom attain (and should not attempt) a fully implemented, end-to-end, automated service catalog. There are several reasons for this. First, such an investment is quite substantial; you may reap most of the benefits by focusing on a smaller set of critical or frequently consumed services. Secondly, not all business services lend themselves easily to automation or to such a hierarchical delivery model. In those cases, it makes more sense to keep humans as the service integrators and equip them with the tools, skills and rights to do that.

Governance

The last point to consider is governance and user rights – who can do what. A good self-service portal allows user-group-specific views and shows only the allowed services. You need to consider both "right to do" and "skills to do" aspects. Technical services are usually too complex for non-technical users, and even for technical users you want to limit how big the services are and how many of them can be provisioned automatically without any control (cost, capacity, etc.). You also need to consider segregation of duties and compliance aspects.
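
A minimal sketch of such user-group-specific filtering, with invented groups, services, and quotas:

```python
# Minimal sketch: each user group sees only the services it is entitled
# to, with provisioning limits. All entries below are invented.
ENTITLEMENTS = {
    "developers":   {"services": {"deploy_dev_environment"}, "max_active": 3},
    "infra_admins": {"services": {"provision_vm", "provision_storage"},
                     "max_active": 50},
}

def visible_services(user_group, catalog):
    allowed = ENTITLEMENTS.get(user_group, {}).get("services", set())
    return [s for s in catalog if s in allowed]

catalog = ["provision_vm", "provision_storage", "deploy_dev_environment"]
print(visible_services("developers", catalog))    # ['deploy_dev_environment']
print(visible_services("infra_admins", catalog))  # the two admin services
```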

Summary

Implementing a service catalog is a key element when it comes to using the cloud. Similar to what I explained in my automation blog, I suggest not overdoing the service catalog design. Instead, build it as a process in which you learn what is important to your organization. Neglecting the basics can lead to situations where fully automated self-service causes a dramatic rise in costs (no controls, unlimited terms, etc.) and to compliance and user-satisfaction issues (services that are too complex and too technical).

In my next blog, I will write about new role and skill requirements.

Hybrid Cloud Cookbook: Avoiding Automation Pitfalls

This post is part of a series of Hybrid Cloud ‘cookbook’ blogs that discuss the key concepts of building and running a true hybrid cloud environment and delivering IT-as-a-Service.

Automation is critical and necessary for any digitalized business and for practices like DevOps. It's all about speed, avoiding human error, and avoiding excess costs. Automation is the non-visible, below-the-line part of ITaaS.

Automation is usually done with dedicated tools that range from BPM tools to infrastructure automation tools such as VMware vRealize Automation (vRA). There are also dedicated tools for specific tasks (DevOps tools, etc.), and then there is good old scripting. The newest category of automation tools is robotic process automation. This wide variety of choices leads to organizations having multiple automation tools and needing to define which one to use for which task. The overall automation tool is often called an orchestration tool; it commands the task-specific automation tools. For example, VMware vRA issues commands to EMC ViPR for storage provisioning, to Puppet for configuration management, and so on.
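
A minimal sketch of this orchestrator pattern; the connector classes are generic stand-ins, not the real vRA, ViPR, or Puppet APIs:

```python
# Minimal sketch: one orchestration layer dispatching tasks to
# task-specific tools behind a common interface. Tools are invented.
class StorageTool:
    def execute(self, task):
        print("storage tool provisions:", task)

class ConfigTool:
    def execute(self, task):
        print("config tool applies:", task)

class Orchestrator:
    def __init__(self):
        self.tools = {"storage": StorageTool(), "config": ConfigTool()}

    def run(self, plan):
        # The orchestrator owns the sequence; each tool owns its domain.
        for domain, task in plan:
            self.tools[domain].execute(task)

Orchestrator().run([
    ("storage", {"size_gb": 100, "tier": "gold"}),
    ("config", {"role": "web-server", "node": "vm-042"}),
])
```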

Regardless of which tool and approach you take, there are a few key considerations.

Value of Automation

Automation is a foremost requirement for delivering ITaaS, and it requires a business commitment to achieve. Organizations aren't always initially thrilled by automation. The most common objection I hear is, "it takes only 3 minutes, there is no sense automating it." While this can be true in some cases, the decision to automate needs to be based on a few key factors:

  • Volume of the task
  • Length and complexity of the end-to-end process
  • Time sensitivity of the task (how valuable is immediate completion?)
  • Avoidable human error and improvement of traceability
  • Most importantly, whether or not the service needs to be provided to a non-technical user, such as an end user

An individual usually sees only a small part of the overall process, and from that point of view automation only makes sense if the transaction volume is large. An extreme example of this dilemma was at a pharma company I worked with. We mapped the entire process of provisioning one VM. While the technical provisioning took only minutes, the end-to-end process took 30 days due to the heavy quality assurance processes. In this case, automation would enable pre-approved VMs using a pre-approved automated process, significantly reducing the need for QA control and dramatically shortening the process time.

Also, when you start to break up the technical tasks of provisioning a VM, you quickly learn that it's not just about the VM, but about all the services needed around it: network, security, data protection, etc. Typically these are done inside separate delivery towers, leading to handoffs, which in turn lead to wasted time and money.

Exception Management and Roll Back

While automation is necessary in many cases, there are a couple of areas to watch in order to achieve the optimal economic benefits. The first is the biggest dilemma of automation: exception handling. While the human brain is super quick to adapt to changes and apply logical problem solving, an automation engine does nothing without explicit instructions. Automating the normal process is simple and straightforward, but automating all the associated exceptions and issues is another matter entirely. Essentially every single step of your process can fail, and the sources of failure are numerous (wrong input, unknown response in return, timeout, etc.). Robotic automation tools aim to solve this through machine learning, so that over time your automation tools become better at handling the normal exceptions. But the real killer here is rollback. You don't just need to automate processes for successful execution, but also for rollback from any point in the process. So how do you undo the parts you already did?
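
One widely used answer is to pair every forward step with an undo action and unwind the completed steps in reverse on failure (sometimes called a saga). A minimal sketch with invented steps:

```python
# Minimal sketch: every forward step registers its own undo action, so a
# failure at any point unwinds only the parts already done.
def fail_dns_registration():
    # Simulated mid-process failure (e.g., a timeout from an external API).
    raise RuntimeError("DNS registration timed out")

def provision_with_rollback(steps):
    """steps: list of (do, undo) pairs; undo runs in reverse on failure."""
    completed = []
    try:
        for do, undo in steps:
            do()
            completed.append(undo)
    except Exception as err:
        print(f"failure: {err} - rolling back {len(completed)} steps")
        for undo in reversed(completed):
            try:
                undo()  # best effort: log and continue if undo itself fails
            except Exception as undo_err:
                print("undo failed, manual cleanup needed:", undo_err)
        raise

try:
    provision_with_rollback([
        (lambda: print("create VM"),      lambda: print("delete VM")),
        (lambda: print("attach storage"), lambda: print("detach storage")),
        (fail_dns_registration,           lambda: print("remove DNS record")),
    ])
except RuntimeError:
    pass  # already reported and rolled back above
```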

Maintaining Automation

Another very important aspect of automation to consider is maintainability. You have to be able to maintain the process description as the process evolves or as connected IT services change. A poorly designed process description can become your new monolith (you don't need a mainframe and COBOL to achieve that), and you might need an A0-size plotter just to print it out. A process description of that size is very hard to understand or debug.

Life Cycle View

If you execute a process to implement or provision something, you should also have a corresponding reverse process. No IT service lives forever, so you need a way to decommission it. Deciding to automate something therefore means building two separate processes for it, nearly doubling maintenance and execution costs.

[Figure: Life cycle view – each provisioning process paired with a decommissioning process]

Approaching How to Automate

All of these considerations can lead to an over-engineered solution. Sometimes a simple, quick-and-dirty solution is the preferred route. In many cases, just getting a process documented in the automation tool is valuable in itself. Most likely, though, you will need an iterative approach: an initial simple implementation that can be built upon once the task volume increases.

What to automate, and how, should be based on facts about demand. You need a way to catch demand signals, which means you should consider how you organize your IT. You also need a process to keep a backlog of requests and decide which ones to implement next. It's not realistic to aim for 100% automation, so consider where you will still use humans as the service integrators, and equip those people with the tools, skills, and resources to do so properly.

Summary: Back to Basics

Overall, the basic principles of any application development also apply to automation:

  • Build only for need
  • Break the process into small and manageable parts (microservices principles)
  • Consider the life cycle of the automated process, in addition to the associated decommissioning
  • Avoid over-engineering the solution

Implementing robust automation is not a trivial task. You need to consider and justify the investment before jumping in. It will be a process in which you make mistakes, throw away old automations that cannot keep up with the changes, and face strong resistance to change. But in today's world of ITaaS, you have no option but to master automation.

As a final thought, here are some very true phrases about automation:

  • “A bad process is still a bad process after you have automated it.”
  • “There is nothing so useless as doing efficiently that which should not be done at all.”
  • “The more complex your infrastructure is, the more complex your automation is, and the more complex problem solving is.”

In my next blog, I will write about the different aspects and the definition of the service catalog.

Hybrid Cloud Cookbook: An Evaluation Framework

This post is part of a series of Hybrid Cloud ‘cookbook’ blogs that discuss the key concepts of building and running a true hybrid cloud environment and delivering IT-as-a-Service.

The cloud, with all its variants, is a much-overused and under-delivered word, especially when it comes to the world of enterprise-grade hybrid clouds. So let me lay out a simple framework for evaluating what we are actually talking about when it comes to an Enterprise Hybrid Cloud.

Cloud

Just implementing an IaaS/cloud stack does not mean that you have built, and are successfully running, a cloud. The simple criteria for a cloud are:

  • Standardized and aggregated services – you get well defined infrastructure service which includes all the needed elements (compute, storage, network, security, data protection, etc.)
  • On-demand self-service – you can provision and decommission services when you want by using a simple portal or API
  • Measured – your consumption is measured and reported
  • Elastic – your consumption can go up and down

In the case of a private cloud, the elasticity can be achieved from the user perspective, but not from the company perspective. All other criteria can and should be met.

Hybrid

Hybrid cloud means that you can do both private and public. Although this is a simple concept, it is not so easy to achieve.

The catch is that implementing separate environments is not yet hybrid. It just means that you have two or more silos which are used in different ways and do not cooperate.

Key criteria for a true hybrid cloud are:

  • Workloads can seamlessly be moved between the private and public cloud
  • Both clouds are used and managed through a common interface and in a common way
  • The same principles apply to both clouds (governance, security, etc.)

Ease of workload migration is one of the attractive promises of hybrid cloud. However, any IT executive who has tried to migrate an application in a traditional data center environment knows that even trivial distinctions, such as a difference in the versions of layered software, can jeopardize a migration. It is highly unlikely that your private and public cloud stacks are the same; in practice, every cloud is a silo, and you need to go through the normal application migration effort when moving between clouds. This also holds true for OpenStack: as a result of its many user-selectable parts, there are rarely two identical OpenStack implementations, so workload movement is certainly not guaranteed.

You have a couple of options for true workload movement. The first is to design and code your application from the ground up to be as agnostic as possible to the underlying IaaS environment. This may also include using a Cloud Foundry-type PaaS. The second is to choose a public cloud that has an architecture similar to your own data center infrastructure. For example, VMware vCloud Air runs any workload that runs on your own VMware infrastructure, provided that you have extended your network appropriately.

Enterprise

Enterprise here refers to the set of qualitative criteria your cloud needs to meet. Every company has its own criteria, but the following are some of the most common:

  • Service levels – cloud meets agreed service levels (uptime, latency, and performance)
  • Security – authentication, network security, etc.
  • Enforced policies – approvals, data protection rules, etc.

If you have a cloud-native application, SLAs are less critical, because your application has been designed and built from the ground up to tolerate the latency and performance variations of the public cloud. All other applications need stable and predictable infrastructure.

With hybrid cloud, the public part needs to be an extension of your network, as workloads need to be able to access the same services regardless of the cloud they run on. This opens up a number of security aspects to consider. It is not that public clouds are always less secure (they can have better security than your own environment), but you still need to assess the risks and protect against them.

Equally important are the policies. When you set up a new database in your own infrastructure, you probably also set up some sort of backup; for critical data, you ensure there is data mirroring between sites as well. When you enable self-service, you need to enforce the same policies in an automated and transparent fashion.
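
A minimal sketch of such transparent enforcement, with invented policy names: the policies are applied inside the provisioning path itself, so a self-service requester cannot skip them:

```python
# Minimal sketch: policies attached to a service type are enforced
# automatically at provisioning time. All names below are invented.
POLICIES = {
    "database": ["nightly_backup"],
    "critical_database": ["nightly_backup", "site_mirroring"],
}

def apply_policy(policy, target):
    # Stand-in for calling the backup/replication tooling.
    print(f"  applying policy '{policy}' to {target}")

def provision(service_type, name):
    print(f"provisioning {service_type} '{name}'")
    # Enforcement lives inside the provisioning path, not as an optional
    # checkbox the requester could skip or forget.
    for policy in POLICIES.get(service_type, []):
        apply_policy(policy, name)

provision("critical_database", "orders-db")
```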

In my next blog, I will write about how to avoid Automation Pitfalls.
