Matt Liebowitz – InFocus Blog | Dell EMC Services

The Transformation Instinct: Nature vs. Nurture
Published March 13, 2019

I recently finished a fantastic read called Factfulness. The book, authored by Swedish physician and statistician Hans Rosling, his son Ola, and daughter-in-law Anna Rosling Rönnlund, focuses on how the world has changed in terms of health and wealth, and on the potential for human progress based on fact rather than inherent biases. Aside from being a really great way to learn critical thinking skills, it shows us that, despite the daily onslaught of bad news, the world is actually getting better. Much better, in fact!

As I read the book, many things came to mind about human progress in terms of IT transformation and multi-cloud—my focus here at Dell Technologies. Throughout history, Hans Rosling tells us, there have been leaders who devised and executed actionable plans to lift their countries and their people out of poverty. Human progress and transformation, however, are ever-evolving and critical to survival, not only in life but in business. And I can't help but relate this to how Dell Technologies' new ProConsult Advisory set of services helps our customers develop an actionable plan for transformation, one that leverages a unique and proven methodology to get there successfully.

I certainly don't intend to spoil the excellent book (I really think you should read it!), but I thought it worthwhile to draw some comparisons between the ideas in Factfulness and technology. There are ten "instincts" covered in the book. Here I'll discuss three of them, though the others could be applied in different ways.

Turn intuition into strength again. —Ola Rosling

The Destiny Instinct: Transformation “Never” Happens

In the book, the authors describe something called the Destiny Instinct: the assumption that things are one way and are destined to remain that way forever. For many things, whether we're talking about IT transforming into a service provider, adoption of multi-cloud, or countries getting out of extreme poverty, one thing is true: change does happen, but it often occurs slowly.

There was a commercial created by Qwest in 1999 that shows a man checking into a little motel and asking about the quality of the rooms and whether they serve breakfast. When he asks about entertainment, he is told that all rooms have “every movie ever made in every language anytime day or night.” This commercial aired well before services like YouTube, Netflix and Hulu existed. When I first saw it, I never believed any of it would be possible, but 20 years later we’ve gotten pretty close. Major transformation happens, but it occurs slowly.

Transforming IT or adopting a true multi-cloud architecture isn't something that happens overnight either. Often it takes many small steps along the way to a full transformation, such as adopting Infrastructure as Code principles or migrating workloads to public clouds. Most importantly, transformation doesn't happen without a plan. That's one of the best parts of ProConsult Advisory – customers get a fully actionable, visual plan for transformation that they can actually see and hold in their hands. It's something they can hang up in their conference room or IT leaders' offices to see exactly where they are on their journey.

The Urgency Instinct: Transformation Happens, but it Has to Occur NOW

Another instinct discussed in the book is Urgency. That is, something needs to happen immediately because it's so important! Or you'll miss out! Or, if you don't transform immediately, your company won't survive!

Do these sound like things you’ve heard before?  If so, I’m not surprised.

Transformation is a process that takes time and you need an actionable plan. Organizations can’t go from primarily running off a single private cloud to suddenly adopting a full multi-cloud architecture complete with standardization, automation, operating model changes, and cost visibility.

If you give in to the Urgency Instinct, you're more likely to buy whatever a vendor tells you will get your business transformed the fastest. If you rush to transform immediately, you may miss important aspects of how your IT organization and your business operate. Your plan to transform should include taking the time to evaluate the goals of your business and the business outcomes you want to achieve, and then developing a plan for how to get there. Transformation should happen thoughtfully, not urgently.

Knowing that transformation is a process that takes time, we employ ProConsult Advisory to map out a strategy that covers the next several years and highlights the key changes that will be required to get there. Not only do you get a strategy, but you get the actual business case analysis to go with it that’s critical to gaining consensus and buy-in from all levels of the business—IT cannot transform on its own and needs support from across the business.

If you want to change the world, you have to understand it. —Hans Rosling

The Generalization Instinct: Transformation Happens the Same for Everyone

The last instinct from the book I want to talk about is the Generalization Instinct: the assumption that change looks the same regardless of where it's happening. If you're a large pharmaceutical company, do you expect your IT transformation journey to look exactly the same as an automotive company's? The answer is obviously no, but curiously, many publications and analysts talk about transformation as if everyone's journey will be the same. Move workloads to the public cloud. Automate everything. Adopt containers and microservices. The list goes on, and the truth is many of these recommendations are right for many organizations. But not all of them are right for all organizations.

The best way to transform is to fully understand the needs of your business, both today and where it needs to go in the future. Transformation for transformation's sake doesn't make sense or solve your organization's challenges. Take multi-cloud, for example – utilizing multiple clouds by itself will not automatically solve every customer challenge. But those customers who adopt multi-cloud and perform an analysis to determine the right cloud for each of their applications (among other needed analyses) will get the best possible experience.

Summary

ProConsult Advisory is built around crafting a strategy and plan for transformation that is specific to each customer's business. We have already worked with customers across all transformation pillars – IT, Workforce, and Application. Recognizing that all customers are different and that no two transformations look the same is a key tenet of ProConsult Advisory, and developing a customized plan is the goal. We want to leave our customers with something that is specific to their business and enables them to act right away.

Have you read Factfulness yet?  Take a listen and a look at how Bill Gates endorses the book.

 

If you have read the book, leave me a comment on what you thought about it and how you think it can apply to your organization’s transformation journey.

Other Sources

Hans Rosling's 2006 TED Talk, "The Best Stats You've Ever Seen"

Hans and Ola Rosling's 2014 TED Talk, "How Not to Be Ignorant about the World"

The Roslings' Pitch for the Book, "Why We Wrote Factfulness"

Microsoft Azure Stack at the Tactical Edge
Published February 4, 2019

By now you've probably heard of Microsoft's Azure Stack solution. The promise of Azure Stack is huge – the ability to take Azure services and run them on-premises using the same toolsets, developer frameworks, and administration your organization already knows from the Azure public cloud. Azure Stack provides organizations with a consistent hybrid cloud and the ability to develop once and provision either on-premises or in the public Azure cloud quickly and easily. The Dell EMC Cloud for Microsoft Azure Stack takes that one step further by wrapping Azure Stack in an engineered hardware solution, the VxRack AS, leveraging Dell servers and hyperconverged storage.

One of the key use cases for Azure Stack is the ability to run cloud workloads and utilize Azure services at the network edge. That keeps data processing close to the source to speed performance of critical applications and provides the foundation for next generation IoT technologies. For most organizations the edge is within their data centers.

What about other organizations for which the edge is, shall we say, beyond the four walls of their data center? What if your edge is a harsh environment, on a battlefield, or even a moving target where the edge changes frequently? We’ve got an Azure Stack for that, too!

Microsoft Azure Stack to the Tactical Edge

Today, Dell EMC is introducing the Dell EMC Tactical Microsoft Azure Stack, the first and only solution to bring the power of Azure Stack to the tactical edge. The solution, initially available in the US, is functionally identical to Dell EMC Cloud for Microsoft Azure Stack but is built to be deployed in scenarios where it would be challenging to run a typical solution. Developed in partnership with Tracewell Systems, it includes Dell EMC servers and networking encased in ruggedized "pods" that are meant to be deployed in harsh conditions. That includes large vehicles, ships that sail on or under the sea, and aircraft that travel all over the world. It is also sized so it can be carried from place to place by just two people, making it easy to move your cloud solution quickly to wherever it's needed. You can't exactly do that with a full rack of servers! The solution is expected to be available later in Q1.

Tactical Microsoft Azure Stack Use Cases

There is already a lot of demand for Azure Stack among government and military customers and I fully expect they’ll find a lot of value in the Tactical Azure Stack solutions.

The military in particular faces the unique challenge of operating in difficult environments, where Tactical Azure Stack is better suited to run. Like the Dell EMC Cloud for Microsoft Azure Stack, Tactical Azure Stack can run in a fully disconnected mode without requiring connectivity back to public Azure, making it easy to deploy in submarines and other locations without readily available Internet access, or in secure locations where an Internet-connected cloud is not desirable. Bringing data processing and critical systems to the tactical edge, close to the applications and people that need them, is a huge advantage of this solution.

Beyond military and government, our Consulting team works with customers across other industries where having a portable Azure Stack can be invaluable. Customers in the mining or energy industries, for example, can see the benefits of deploying Tactical Azure Stack close to their operations without needing a full data center in often challenging locations. Any organization that needs a fully hybrid cloud solution in a form factor that is easily portable and can stand up to harsh conditions can benefit from deploying Tactical Azure Stack.

Our Dell EMC Consulting team has a lot of experience already deploying Azure Stack for government and other secure organizations around the world. We will be applying that experience to the unique requirements of customers that need an Azure Stack solution in a rugged, secure form factor. We’re looking forward to continuing to work with our customers to help achieve their business and operational outcomes and Dell EMC Tactical Microsoft Azure Stack is another tool in the toolbox to help them achieve those goals.

Best Practices for Virtualizing Active Directory Domain Controllers (AD DC), Part II
Published October 15, 2018

Virtualized Active Directory is ready for Primetime, Part II!

In the first of this two-part blog series, I discussed how virtualization-first is the new normal and fully supported; and elaborated on best practices for Active Directory availability, achieving integrity in virtual environments, and making AD confidential and tamper-proof.

In this second installment, I'll discuss the element of time in Active Directory; touch on replication, latency, and convergence; and cover preventing and remediating lingering objects, cloning, and, of much relevance, preparedness for disaster recovery.

Proper Time with Virtualized Active Directory Domain Controllers (AD DC)

Time in virtual machines can easily drift if they are not receiving constant and consistent time cycles. Windows operating systems keep time based on interrupt timers set by CPU clock cycles. In a VMware ESXi host with multiple virtual machines, CPU cycles are not allocated to idle virtual machines.

To plan for an Active Directory implementation, you must carefully consider the most effective way of providing accurate time to domain controllers and understand the relationship between the time source used by clients, member servers, and domain controllers.

The domain controller holding the PDC Emulator role for the forest root domain ultimately becomes the "master" time server for the forest – the root time source for synchronizing the clocks of all Windows computers in the forest. You can configure the PDC to use an external source to set its time. By modifying the defaults of this domain controller's role to synchronize with an external stratum 1 time source, you can ensure that all other DCs and workstations within the domain are accurate.
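
The PDC configuration described above can be sketched with the built-in w32tm tool from an elevated prompt on the forest root PDC Emulator. The NTP server names below are placeholders, not a recommendation – substitute your own stratum 1 sources:

```shell
# Point the forest root PDC Emulator at external time sources (names are
# illustrative); mark it as a reliable time source for the forest.
w32tm /config /manualpeerlist:"0.pool.ntp.org 1.pool.ntp.org" /syncfromflags:manual /reliable:yes /update

# Restart the Windows Time service so the new configuration takes effect
net stop w32time
net start w32time
```

Remember that if the PDC Emulator FSMO role is ever transferred, this configuration must follow it to the new role holder.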

Why Time Synchronization Is Important in Active Directory

Every domain-joined device is affected by time!

Ideally, all computer clocks in an AD DS domain are synchronized with the time of an authoritative computer. Many factors can affect time synchronization on a network. The following factors often affect the accuracy of synchronization in AD DS:

  • Network conditions
  • The accuracy of the computer’s hardware clock
  • The amount of CPU and network resources available to the Windows Time service

Prior to Windows Server 2016, the W32Time service was not designed to meet time-sensitive application needs. Updates to Windows Server 2016 allow you to implement a solution for 1ms accuracy in your domain.
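
To see whether a domain controller is actually receiving accurate time, the built-in w32tm diagnostics are a reasonable starting point; the reference server in the stripchart example is illustrative:

```shell
# Show the local time service status: stratum, time source, last sync time
w32tm /query /status

# Show where the configuration comes from (local settings vs. Group Policy)
w32tm /query /configuration

# Plot the offset between the local clock and a reference server
w32tm /stripchart /computer:time.windows.com /samples:5 /dataonly
```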

Figure 1: How Time Synchronization Works in Virtualized Environments

See Microsoft’s How the Windows Time Service Works for more information.

How Synchronization Works in Virtualized Environments

An AD DS forest has a predetermined time synchronization hierarchy. The Windows Time service synchronizes time between computers within the hierarchy, with the most accurate reference clocks at the top. If more than one time source is configured on a computer, Windows Time uses NTP algorithms to select the best time source from the configured sources based on the computer’s ability to synchronize with that time source. The Windows Time service does not support network synchronization from broadcast or multicast peers.

Replication, Latency and Convergence

Eventually, changes must converge in a multi-master replication model…

The Active Directory database is replicated between domain controllers. The replicated data is organized into partitions, also called "naming contexts." Once a domain controller has been established, only changes are replicated. Active Directory uses a multi-master model: changes can be made on any domain controller, and those changes are sent to all other domain controllers. The replication path in Active Directory forms a ring, which adds reliability to the replication.

Latency is the required time for all updates to be completed throughout all domain controllers on the network domain or forest.

Convergence is the state at which all domain controllers have the same replica contents of the Active Directory database.

Figure 2: How Active Directory Replication Works

For more information on replication, latency, and convergence, see Microsoft's "Detecting and Avoiding Replication Latency."
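
As a hedged sketch, replication health and convergence can be checked with the built-in repadmin tool; the DC and naming context names below are placeholders:

```shell
# Per-DC summary of largest replication deltas and failure counts
repadmin /replsummary

# Inbound replication status for all DCs, in CSV form for easy filtering
repadmin /showrepl * /csv > repl.csv

# Up-to-dateness vector for one DC and partition -- a way to gauge how far
# a DC is from convergence with its partners (names are illustrative)
repadmin /showutdvec DC01 "DC=corp,DC=example,DC=com"
```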

Preventing and Remediating Lingering Objects

Don’t revert to snapshot or restore backups beyond the TSL.

Lingering objects are objects in Active Directory that have been created, replicated, deleted, and then garbage collected on at least the Domain Controller that originated the deletion but still exist as live objects on one or more DCs in the same forest. Lingering object removal has traditionally required lengthy cleanup sessions using various tools, such as the Lingering Objects Liquidator (LoL).

Dominant Causes of Lingering Objects

  1. Long-term replication failures

While knowledge of creates and modifies is persisted in Active Directory indefinitely, replication partners must inbound-replicate knowledge of deleted objects within a rolling Tombstone Lifetime (TSL) number of days (default 60 or 180 days, depending on which OS version created your AD forest). For this reason, it's important to keep your DCs online and replicating all partitions between all partners within a rolling TSL number of days. Tools like REPADMIN /SHOWREPL * /CSV, REPADMIN /REPLSUMMARY, and the AD Replication Status tool should be used to continually identify and resolve replication errors in your AD forest.

  2. Time jumps

A system time jump of more than the TSL number of days into the past or future can cause deleted objects to be prematurely garbage collected before all DCs have inbound-replicated knowledge of all deletes. The protection against this is to ensure that:

  • The forest root PDC is continually configured with a reference time source (including following FSMO transfers).
  • All other DCs in the forest are configured to use NT5DS hierarchy.
  • Time rollback and roll-forward protection has been enabled via the maxnegphasecorrection and maxposphasecorrection registry settings or their policy-based equivalents.
  • The importance of configuring safeguards can’t be stressed enough.
  3. USN rollbacks

USN rollbacks are caused when the contents of an Active Directory database move back in time via an unsupported restore. Root causes for USN Rollbacks include:

  • Manually copying previous version of the database into place when the DC is offline.
  • P2V conversions in multi-domain forests.
  • Snapshot restores of physical and especially virtual DCs. For virtual environments, both the virtual host environment AND the underlying guest DCs should be compatible with VM Generation ID. Windows Server 2012 or later, and vSphere 5.0 Update 2 or later, support this feature.
Watch for the events, errors, and symptoms that indicate you have lingering objects.

Figure 3: USN Rollbacks – How Snapshots Can Wreak Havoc on Active Directory
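
The time rollback and roll-forward protection mentioned above maps to registry settings on each DC. The 172800-second (48-hour) value below is a commonly cited tolerance, not a mandate; set it per your own environment, and prefer the policy-based equivalents where Group Policy is in use:

```shell
# Cap how far the Windows Time service may correct the clock backward or
# forward (values in seconds; 172800 = 48 hours, illustrative)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config" /v MaxNegPhaseCorrection /t REG_DWORD /d 172800 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config" /v MaxPosPhaseCorrection /t REG_DWORD /d 172800 /f

# Tell the time service to pick up the changed configuration
w32tm /config /update
```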

Cloning

You should always use a test environment before deploying the clones to your organization’s network.

DC cloning enables fast, safer domain controller provisioning through a clone operation.

When you create the first domain controller in your organization, you are also creating the first domain, the first forest, and the first site. It is the domain controller, through group policy, that manages the collection of resources, computers, and user accounts in your organization.

Active Directory Disaster Recovery Plan: It’s a Must

Build, test, and maintain an Active Directory Disaster Recovery Plan!

AD is indisputably one of an organization's most critical pieces of software plumbing, and in the event of a catastrophe – the loss of a domain or forest – its recovery is a monumental task. You can use Azure Site Recovery to help create a disaster recovery plan for Active Directory.

Microsoft's Active Directory disaster recovery plan is an extensive document: a set of high-level procedures and guidelines that must be extensively customized for your environment. It serves as a vital point of reference when determining root cause and deciding how to proceed with recovery alongside Microsoft Support.

Summary

There are several excellent reasons for virtualizing Windows Active Directory. The release of Windows Server 2012 and its virtualization-safe features and support for rapid domain controller deployment alleviates many of the legitimate concerns that administrators have about virtualizing AD DS. VMware® vSphere® and our recommended best practices also help achieve 100 percent virtualization of AD DS.

Please reach out to your Dell EMC representative or check out Dell EMC Consulting Services to learn how we can help you with virtualizing AD DS, or leave me a comment below and I'll be happy to respond.

Sources

Virtualizing a Windows Active Directory Domain Infrastructure

Related Blog

Best Practices for Virtualizing Active Directory Domain Controllers (AD DC), Part I

Best Practices for Virtualizing Active Directory Domain Controllers (AD DC), Part I
Published September 17, 2018

Virtualized Active Directory is ready for Primetime!

In today's technology climate, monitoring for changes should be part of the organization's security culture. Your IT team knows the importance of securing the network against data breaches from external threats; however, data breaches from inside the organization represent nearly 70% of all data leaks[1].

Are you doing enough to prevent data leaks? Enter Active Directory Domain Services (AD DS).

“Virtualize-First” Is the New Normal

Reasons to virtualize Active Directory Domain Controllers.

As the predominant directory service and authentication store, AD DS is present in the majority of network infrastructures and is a business-critical application (BCA). It provides the methods for storing directory data and making that data available to network users and administrators. It stores information about user accounts – names, passwords, phone numbers, and so on – and enables authorized users on the same network to access that information.

In much the same way that the criticality of AD DS differs between organizations, so does the acceptance of virtualizing this service. More conservative organizations choose to virtualize a portion of the AD DS environment and retain a portion on physical hardware. This proclivity stems from the complexity of timekeeping in virtual machines, deviation from current build processes or standards, the desire to keep an AD Flexible Single Master Operations (FSMO) role physical, concerns about privilege escalation, and fear of a stolen .vmdk.

Figure 1: Common Objections to Domain Controller Virtualization

But fear not!

The release of Windows Server 2012 (and Windows Server 2016) and its virtualization-safe features and support for rapid domain controller deployment alleviates many of the legitimate concerns that administrators have about virtualizing AD DS. VMware® vSphere® and our recommended best practices also help achieve 100 percent virtualization of AD DS.

Best Practices for Active Directory (AD) Availability

Active Directory is the cornerstone of every environment – when Active Directory comes to a halt, everything connected to it does too.

Since many domain controller virtual machines may be running on a single VMware ESXi host, eliminating single points of failure and providing a high-availability solution will ensure rapid recovery. VMware provides solutions for automatically restarting virtual machines. If a VMware ESXi host goes down, VMware High Availability (HA) can automatically restart a domain controller virtual machine on one of the remaining hosts, preventing loss of Active Directory. Using configuration options, you can prioritize the restart or isolation status of individual virtual machines. For example, it is important for domain controllers functioning as global catalog servers to be online before your Exchange Server environment initializes. It is always a best practice to set your domain controller virtual machines as high-priority servers.

Additionally, you can implement a script to restart a virtual machine via a loss-of-heartbeat alarm through vCenter (scripts are available with the VI Perl Toolkit or the VMware Infrastructure SDK 2.0.1). Combined with VMware Distributed Resource Scheduler (DRS) anti-affinity rules, this ensures that domain controllers from the same domain always reside on different VMware ESXi hosts, preventing all of your domain controllers from ending up in one basket. Anti-affinity rules let you specify which virtual machines must stay together and which must be kept apart.
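
As a sketch of the anti-affinity rule described above, assuming VMware PowerCLI is installed and connected to vCenter; the cluster and VM names are hypothetical:

```shell
# Create a DRS anti-affinity rule (KeepTogether:$false) so the listed
# domain controllers are always placed on separate ESXi hosts.
# "Prod-Cluster", "dc01", and "dc02" are placeholder names.
New-DrsRule -Cluster (Get-Cluster "Prod-Cluster") -Name "Separate-DCs" -KeepTogether:$false -VM (Get-VM "dc01","dc02") -Enabled:$true
```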

For guidance, follow Microsoft Operations Master Role Placement Best Practices or Dell EMC’s recommended practices.

Achieving Active Directory (AD) Integrity in Virtual Environments

Performing consistent system state backups eliminates hardware incompatibility when performing a restore and ensures the integrity of the Active Directory database by committing transactions and updating database IDs.

For success in implementing Active Directory in the virtual environment, you must ensure a successful migration from the physical environment to the virtual environment. Since Active Directory is heavily dependent on a transaction-based datastore, you must guarantee integrity by making sure there is a solid, reliable means of providing accurate time services to the PDC Emulator and other domain controllers throughout the Active Directory forest.

Network performance is another key to success in a virtual Active Directory implementation, since slow or unreliable network connections can make authentication difficult. Modifying DNS weight and priority to reduce load on the primary domain controller can help improve network performance. Because Active Directory depends on reliable replication, ensure continuity by using a tool such as replmon to monitor it. Also, continue regular system state backups, and always restore from a system state backup. Virtual machines make it easy to move domain controllers; use VMware High Availability (HA) and VMware Distributed Resource Scheduler (DRS) rules so that no two critical domain controllers reside on a single host.
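
The DNS weight and priority tuning mentioned above maps to the documented Netlogon registry parameters. The weight value below is illustrative (the default is 100); lowering a busy DC's SRV record weight makes clients favor other DCs:

```shell
# Reduce this DC's share of client traffic by lowering its SRV record
# weight; priority 0 is the default (lower priority wins first).
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters" /v LdapSrvWeight /t REG_DWORD /d 50 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters" /v LdapSrvPriority /t REG_DWORD /d 0 /f

# Restart Netlogon so it re-registers its SRV records with the new values
net stop netlogon
net start netlogon
```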

Practice the art of disaster recovery regularly. Finally, always go back and re-evaluate your strategies; monitor results for improvements and make adjustments when necessary.

Making Active Directory Confidential and Tamper-proof

Assessments in organizations that have experienced catastrophic or compromised events usually reveal they have limited visibility into the actual state of their IT infrastructures, which may differ significantly from their “as documented” states. These variances introduce vulnerabilities that expose the environment to compromise, often with little risk of discovery until the compromise has progressed to the point at which the attackers effectively “own” the environment.

Detailed assessments of these organizations’ AD DS configuration, public key infrastructures (PKIs), servers, workstations, applications, access control lists (ACLs), and other technologies reveal gaps in administrative practices, misconfigurations and vulnerabilities that, if remediated, could have prevented compromise and in extreme cases, prevented attackers from establishing a foothold in the AD DS environment.

See Microsoft’s Monitoring Active Directory for Signs of Compromise for further insights.

Figure 2: 4 tips for General Practices for Active Directory Confidentiality

Summary

There are several excellent reasons for virtualizing Windows Active Directory. Virtualization offers the advantages of hardware consolidation, total cost of ownership reduction, physical machine lifecycle management, mobility and affordable disaster recovery and business continuity solutions. It also provides a convenient environment for test and development, as well as isolation and security.

Stay tuned for part II of this blog series where I’ll address proper time and synchronization with virtualized AD DC, replication, latency and convergence; preventing and remediating lingering objects, cloning, and disaster recovery.

Please reach out to your Dell EMC representative or check out Dell EMC Consulting Services to learn how we can help you with virtualizing AD DS, or leave me a comment below and I'll be happy to respond.

Sources

Virtualizing a Windows Active Directory Domain Infrastructure

Microsoft’s Avenues to Compromise

[1] Statista.com Data Breaches Recorded in the U.S. by Number of Breaches and Records Exposed

Related Blog

Best Practices for Virtualizing Active Directory Domain Controllers (AD DC), Part II

The post Best Practices for Virtualizing Active Directory Domain Controllers (AD DC), Part I appeared first on InFocus Blog | Dell EMC Services.

Multicloud is the New Reality https://infocus.dellemc.com/matt-_liebowitz/multi-cloud-is-the-new-reality/ https://infocus.dellemc.com/matt-_liebowitz/multi-cloud-is-the-new-reality/#respond Mon, 23 Apr 2018 08:55:21 +0000 https://infocus.dellemc.com/?p=34994 In my many years in the IT industry I’ve seen many new industry buzzwords come out and then immediately become adopted by everyone. Words like virtualization and cloud were used by everyone and vendors would rush to say their products were “cloud ready” or “optimized for cloud” to capture that excitement. Suddenly everyone’s virtualized environment […]

The post Multicloud is the New Reality appeared first on InFocus Blog | Dell EMC Services.

In my many years in the IT industry I’ve seen new buzzwords emerge and immediately be adopted by everyone. Words like virtualization and cloud were on everyone’s lips, and vendors would rush to say their products were “cloud ready” or “optimized for cloud” to capture that excitement. Suddenly everyone’s virtualized environment became a cloud, even if it was called a virtual infrastructure just the week prior. We’re seeing more of that today with new buzzwords like blockchain, IoT, and others. In my world the new hotness is “multicloud.”

It’s true – hybrid cloud has become old and busted and the new hotness is multicloud (well done, if you got my 90s movie reference). Folks often conflate the terms hybrid cloud and multicloud, thinking they mean the same thing. The truth is that hybrid cloud and multicloud are distinct concepts, and both are equally important to an organization’s IT strategy.

Multicloud may be a new buzzword, but where there’s smoke there’s fire. The RightScale 2018 State of the Cloud Report found that 81% of organizations have a multicloud strategy. That shows organizations are taking multicloud seriously and recognizing that a cloud strategy that looks holistically across clouds is the future. Perhaps more importantly, the report found that organizations are already using five clouds today, on average.

If an organization uses five clouds, does that mean it has adopted a multicloud strategy? Is it as simple as using multiple clouds for your infrastructure and applications?

As with most things in IT, and in life, it isn’t quite that simple. If you simply use multiple clouds for different purposes without tying them together then you’ve likely just created new silos that increase management costs and introduce risk.

In order to properly tie multiple clouds together you need to consider a few elements.

  • Embrace a cloud-first operating model
  • Control your destiny
  • Adopt an actionable strategy

When organizations embrace a cloud-first operating model, they can move more quickly to implement new ideas, lower overall complexity and risk, and create systems that are transparent and efficient. The people and process portion of multicloud is absolutely critical to success, as you can’t solve this with technology alone. An operating model that allows for DevOps, provides cost visibility of workloads, and uses a service management framework is absolutely necessary for success in multicloud.

Next, organizations need to control their own destiny by choosing the cloud infrastructure that supports their goals and business objectives. The cloud infrastructure an organization chooses needs to be able to be deployed quickly, be integrated across compute, storage, and networking, and support cloud access. The right cloud infrastructure can present a single interface for managing and provisioning resources in a hybrid cloud model (see – hybrid cloud isn’t so old and busted after all). Tools can be used to centralize cloud access, perform cost analysis, and simplify cloud consumption.

Finally, organizations need to adopt an actionable strategy that is aligned to their desired business outcomes. This strategy needs to consider:

  • The infrastructure that will be used to support their cloud initiatives
  • The applications that will either be moved to the new cloud platforms or ultimately retired or refactored into cloud-native applications
  • The operating model that, tightly integrated with the business, brings this all together

Delivering Modern Applications with Azure Stack https://infocus.dellemc.com/matt-_liebowitz/delivering-modern-applications-with-azure-stack/ https://infocus.dellemc.com/matt-_liebowitz/delivering-modern-applications-with-azure-stack/#respond Sun, 25 Mar 2018 16:21:18 +0000 https://infocus.dellemc.com/?p=34571 Accelerate Your Digital Transformation with Dell EMC Cloud for Microsoft Azure Stack Many vendors in the cloud world are trying to approach the challenge of providing an easy way to deploy applications on-premises in a private cloud as well as off-premises into a public cloud. Microsoft’s approach is to provide a common interface, development framework, […]

The post Delivering Modern Applications with Azure Stack appeared first on InFocus Blog | Dell EMC Services.

Accelerate Your Digital Transformation with Dell EMC Cloud for Microsoft Azure Stack

Many vendors in the cloud world are tackling the challenge of providing an easy way to deploy applications on-premises in a private cloud as well as off-premises into a public cloud. Microsoft’s approach is to provide a common interface, development framework, and automation engine to deploy applications either in the public cloud with Azure or in a private, on-premises cloud with its Azure Stack solution. The industry response, unsurprisingly, has been largely positive.

Being able to develop an application once and deploy it anywhere without needing to modify or re-work it is a key value proposition of Azure Stack. This capability makes Azure Stack popular not only for customers who already are heavy consumers of Azure public but also service providers who are looking to offer Azure services to their customers.
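To make that portability concrete, here is a minimal sketch of the idea. The endpoint URLs and the deployment step are illustrative stand-ins, not a working Azure SDK call; the point is simply that the same template targets either cloud, and only the Azure Resource Manager endpoint differs between public Azure and an on-premises Azure Stack instance.

```python
# Illustrative sketch only: endpoints and deploy logic are stand-ins,
# not the real Azure SDK. The template is unchanged across targets;
# only the management endpoint it is submitted to differs.

ENDPOINTS = {
    "azure": "https://management.azure.com",                       # public Azure
    "azure_stack": "https://management.local.azurestack.external", # example on-prem endpoint
}

def build_deployment(target: str, template: dict) -> dict:
    """Pair an unchanged ARM-style template with the target cloud's endpoint."""
    if target not in ENDPOINTS:
        raise ValueError(f"unknown target cloud: {target}")
    # A real implementation would authenticate and submit the template
    # to this endpoint via the Azure Resource Manager API.
    return {"endpoint": ENDPOINTS[target], "template": template}

template = {"resources": [{"type": "Microsoft.Compute/virtualMachines", "name": "app-vm"}]}
public = build_deployment("azure", template)
on_prem = build_deployment("azure_stack", template)
assert public["template"] == on_prem["template"]  # same artifact, either cloud
```

Develop the template once, then point the same deployment logic wherever the workload needs to live.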

Dell EMC Cloud for Microsoft Azure Stack solution enables customers to run Azure Stack software on Dell EMC hardware

Azure Stack can provide an ‘easy button’ for those that already use Microsoft Azure and the many cloud services that it provides. Organizations don’t need to train their developers on new tools or IT administrators/operators on a new system they have to manage and maintain. Azure Stack provides agility to organizations that are looking to deploy new applications quickly and consistently across their organization and to their customers.

Here at Dell EMC, we offer our Dell EMC Cloud for Microsoft Azure Stack solution to enable our customers to run the powerful Azure Stack software on proven Dell EMC hardware. Our solution also goes beyond simply providing a hardware platform on which to run the Azure Stack software. We enable our customers to provide a complete cloud solution to their customers or end users.

To do this, we:

  • Deliver a hardware platform that is fully tested and integrated with the Azure Stack software to provide a fully functional, supported, and trusted cloud platform.
  • Integrate our own industry leading solutions for things like data protection and security directly into the solution to provide additional functionality to meet our customers’ business objectives.
  • Provide our experienced Consulting team who can customize the solution for our customers.

Dell EMC and Microsoft are holding a series of events to help our customers better understand the capabilities of Azure Stack and start thinking about how they may adopt it in their own environments. These events will cover an overview of the solution, potential use cases for the Azure Stack platform, and then a live demo of the Azure Stack solution. Our Consulting team (myself included) will be at these events talking about how we can help customers deploy their modern applications on a fully automated and integrated Azure Stack cloud.

I truly hope you can make it out to one of these upcoming events to see the power of Azure Stack and what it can bring to your organization.


Your Exclusive In-Person Invitation to Learn More about Dell EMC Cloud for Microsoft Azure in a City Near You

Extend your investment in Azure to deliver consistent end-user experiences wherever the data and applications reside. Dell EMC Cloud for Microsoft Azure Stack allows you to experience true application and workload portability — on both the public Azure cloud and within the data center.

Join solutions experts and technology leaders from Microsoft and Dell EMC to learn how, with a fully-integrated Azure Stack platform, you can:

  • Improve delivery time for new applications and services with a turnkey infrastructure platform.
  • Become an IT-as-a-Service broker, elevating IT’s importance to the business.
  • Meet the demands of regulatory compliance and customer data privacy.

You will also participate in an Azure Stack demonstration and an interactive discussion about use cases. We look forward to meeting with you!

Click the links below to register now!

Thursday, April 5th (Santa Clara, CA)

Tuesday, April 24th (Denver, CO)

Click here for the Solution Overview of Dell EMC Cloud for Microsoft Azure Stack.
Virtualize Active Directory, the Right Way! https://infocus.dellemc.com/matt-_liebowitz/virtualize-active-directory-right-way/ https://infocus.dellemc.com/matt-_liebowitz/virtualize-active-directory-right-way/#respond Thu, 17 Aug 2017 11:35:07 +0000 https://infocus.dellemc.com/?p=32134 Virtualizing Microsoft Active Directory domain controllers, and business critical applications in general, is near and dear to my heart.  I firmly believe that there are almost no applications left that can’t be virtualized, and this session gives me an opportunity to share my experiences and help others become successful. Business critical applications have become, for […]

The post Virtualize Active Directory, the Right Way! appeared first on InFocus Blog | Dell EMC Services.

Virtualizing Microsoft Active Directory domain controllers, and business critical applications in general, is near and dear to my heart. I firmly believe that there are almost no applications left that can’t be virtualized, and this session gives me an opportunity to share my experiences and help others become successful. Business critical applications have become, for the most part, the last applications and servers that are still physical for many organizations. Getting as close to 100% virtualization as possible is an important goal to strive for.

Why is that important? Another firmly held belief of mine is that virtualization is truly the on-ramp to the cloud. By virtualizing even your organization’s most important workloads, you take one step closer to a future state where you can start taking advantage of cloud computing in your organization.

Of course, simply having a virtual infrastructure doesn’t mean you have a cloud. A true hybrid cloud involves additional components to facilitate automation and orchestration and to provide users with a service catalog where they can consume IT resources on a self-service basis. Virtualizing your organization’s servers makes it easier to start layering in those cloud components, and once they’re in place you’ll want even your business critical servers virtualized so you can start taking advantage of what a true hybrid cloud has to offer.

It’s that time again – the annual VMworld conferences.  This is my 13th VMworld!

This year I’m presenting a session called “Virtualizing Active Directory: The Right Way!” on Tuesday, Aug 29, 4:00 p.m. – 5:00 p.m. It was a top 10 session last year, so if you’re at the conference, come by early to get a good seat. Bring your copy of Virtualizing Microsoft Business Critical Applications on VMware vSphere or VMware vSphere: Performance and I’ll be happy to sign it. Let me (@mattliebowitz) know what you think of the session, the book, or the conference.

If you’re walking down the halls at VMworld and happen to see someone who looks like former VMware CEO Paul Maritz, stop him and say hi. It’s probably me!

Does Enterprise Hybrid Cloud Fulfill the Promise of “True” Hybrid Cloud? https://infocus.dellemc.com/matt-_liebowitz/enterprise-hybrid-cloud-fulfill-the-promise-of-true-hybrid-cloud/ https://infocus.dellemc.com/matt-_liebowitz/enterprise-hybrid-cloud-fulfill-the-promise-of-true-hybrid-cloud/#comments Mon, 10 Oct 2016 12:00:27 +0000 https://infocus.dellemc.com/?p=28886 Late last year I read a great article from Wikibon called “True” Private Cloud will begin shipping to the market in 2016. I really liked how their definition of private cloud matched up with the capabilities and structure of our own Enterprise Hybrid Cloud. As I sit here on this long flight from New Jersey […]

The post Does Enterprise Hybrid Cloud Fulfill the Promise of “True” Hybrid Cloud? appeared first on InFocus Blog | Dell EMC Services.

Late last year I read a great article from Wikibon called “True” Private Cloud will begin shipping to the market in 2016. I really liked how their definition of private cloud matched up with the capabilities and structure of our own Enterprise Hybrid Cloud. As I sit here on this long flight from New Jersey to Las Vegas for VMworld 2016, I decided to revisit that article and see how well it has stood up in 2016, whether our Enterprise Hybrid Cloud really meets their definition of True Private Cloud and, more importantly, why it’s important to have hybrid as part of your cloud strategy.

Comparing True Private Cloud to Enterprise Hybrid Cloud

To start off, let’s look at how Wikibon defines True Private Cloud and how it compares to Enterprise Hybrid Cloud.

Converged infrastructure

“Built with a foundation of converged (or hyperconverged) infrastructure, that can be highly automated and managed as logical pools of compute, network and storage resources.”

Since the release of Enterprise Hybrid Cloud in 2014, our company has supported converged solutions like Vblock and VxBlock as the platform of choice. We’ve also supported a “bring your own” model where a customer can choose their own hardware and, provided it meets the requirements, our services team helps the customer convert it to Enterprise Hybrid Cloud.

Despite that, the vast majority of our customers have gone down the route of converged infrastructure. Why? Customers get it. They know that converged infrastructure is the fastest path to success, simplifying the architecture while providing a powerful and supported combination of industry leading technologies.

Self-service

“Enables end users (developers, line-of-business, etc.) to have self-service access to resource pools and have visibility to internal costs or IT chargeback pricing.”

It’s true: you can provide powerful hardware to run your cloud. But the truth is, if IT consumers can’t easily get access to your cloud solution, they’re going to find love in the arms of another cloud. A true private/hybrid cloud needs to provide that same self-service provisioning and cost visibility that public clouds provide.

The Enterprise Hybrid Cloud leverages the power of VMware’s vRealize Suite to provide powerful self-service capabilities and cost visibility back to the business. That suite of software gives users a powerful self-service catalog and orchestration engine, a tool to monitor performance in the environment, and cost visibility for the resources consumed. Combined with the extensive engineering that went into creating Enterprise Hybrid Cloud, this provides customers with a very functional cloud solution.

One-stop shopping for support

“A single point of purchase, support, maintenance, and upgrade for a pre-tested and fully maintained complete solution (a single throat to choke).”

As a technologist it’s often easy to get caught up in the “speeds and feeds” of a cloud solution. While that may be technically interesting, the thing that CIOs care about is driving business value from IT. Creating a cloud from scratch is a daunting task for customers and the fact that Enterprise Hybrid Cloud has been created with thousands of hours of engineering effort makes it a very compelling solution. Customers know when they unwrap their Enterprise Hybrid Cloud it’s not an “assembly required” platform. They know it’ll be delivered quickly and be ready to go “out of the box,” quickly driving business value right away instead of months in the future.  Again, customers get it.

There are other pieces of Wikibon’s definition of True Private Cloud that I encourage you to read, but you might be wondering why I’m talking about Enterprise Hybrid Cloud in the context of private cloud. Maybe I’m hopeful next year Wikibon will change their definition to True Hybrid Cloud?

The key to a successful hybrid cloud implementation

The fact is customers need to adopt a solution that has both private cloud capabilities and public cloud capabilities. The key to making the hybrid model successful is to use a platform that provides hybrid functionality along with private. If IT tells its developers to go to one tool for on-premises and another tool for off-premises it’s likely to end badly.

Most developers or end users don’t care where their workload is provisioned. They care about things like performance characteristics, capabilities, and cost (to name a few). Making all of this visible for both public and private clouds all from the same interface allows the consumer of cloud resources to make the decision based on the needs of the business and not the limitations of the technology. We know customers want this, and we listen to our customers.

Enterprise Hybrid Cloud supports “out of the box” integration with public cloud providers like VMware vCloud Air and Amazon Web Services. In the future we’ll see even more public clouds supported, providing customers with the choices they need to make the decisions that are right for their business.

In closing, I think the Wikibon article does a great job of defining not only private cloud but also hybrid cloud. And I’m also happy to see that Enterprise Hybrid Cloud “checks the boxes” of private cloud while also providing hybrid capabilities that our customers are asking for.

Tips for Unlocking Business Value with Cloud https://infocus.dellemc.com/matt-_liebowitz/tips-unlocking-business-value-cloud/ https://infocus.dellemc.com/matt-_liebowitz/tips-unlocking-business-value-cloud/#respond Mon, 03 Oct 2016 11:00:43 +0000 https://infocus.dellemc.com/?p=28889 As I’ve talked about both with customers and in previous blog posts, cloud needs to drive business value. CIOs are not interested in deploying cloud because they read a blog post about it or because Gartner says they should. Ultimately they understand that the world of technology is changing and people are increasingly expecting a […]

The post Tips for Unlocking Business Value with Cloud appeared first on InFocus Blog | Dell EMC Services.

As I’ve talked about both with customers and in previous blog posts, cloud needs to drive business value. CIOs are not interested in deploying cloud because they read a blog post about it or because Gartner says they should.

Ultimately they understand that the world of technology is changing and people are increasingly expecting a self-service model in everything they do. This is true whether they’re downloading an application on their smartphone, calling for a car, or provisioning IT resources. CIOs need to adopt this model (IT as a Service) to help bring value to the business and drive the necessary outcomes of the business.

What does it actually mean to drive business value? It sounds really good to say it and people think you’re smart, but obviously there’s more to it than that. Let’s look at some examples of how cloud drives business value for customers.

It’s all about the applications

For those of us in technology we sometimes spend a little too much time thinking about the hardware in our solutions. I’ll admit, I’m guilty of it, too. When a new smartphone is being released I’m always interested in how much RAM it has and how many CPU cores it has—as if I’m going to run virtual machines on it (I totally would, if I could, but that’s beside the point). When you think about it, what good is the extra RAM or CPU power in a smartphone (or cloud) if it doesn’t run the software you need? It all comes down to the applications.

One way to extract business value from cloud is a simple “lift and shift” of your application workloads into the cloud. The inherent capabilities of a cloud like Enterprise Hybrid Cloud, including self-service management, cost visibility, and backup as a service, bring functionality that likely wasn’t there previously. That does deliver some value, but dropping an application into a cloud doesn’t typically provide enhanced automation and orchestration at the application level. Making application owners more nimble and providing capabilities beyond what was available before moving applications to the cloud is when you really start driving value.

Enterprise application blueprints drive value

EMC has invested thousands of hours of engineering in the Enterprise Hybrid Cloud platform, creating integration with Dell EMC products and providing lots of great functionality. One area where significant engineering effort was spent was creating a set of application blueprints for common enterprise applications. These include Microsoft Exchange Server, SQL Server, and the Oracle database platform, just to name a few.

We hear from customers all the time that their teams want database as a service (DBaaS), allowing developers to more quickly provision and manage databases for the applications they’re writing. The Engineered Blueprints for Microsoft SQL Server open the door to DBaaS by allowing our Dell EMC Services team to drop in a set of fully engineered blueprints for SQL Server that provide DBaaS capabilities. For example, these blueprints allow provisioning and de-provisioning of individual databases, database instances, or even entire database servers. End users can also back up and restore databases on demand, freeing them to work more quickly rather than waiting for IT or DBAs to perform these functions for them.
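As an illustration, and not the actual Engineered Blueprints API (the class and operation names here are hypothetical), a DBaaS catalog of the kind described above exposes roughly these self-service operations:

```python
class DatabaseService:
    """Hypothetical self-service DBaaS facade; illustrative only."""

    def __init__(self):
        self._databases = {}  # name -> database record
        self._backups = {}    # backup_id -> snapshot of a record

    def provision(self, name, size_gb=10):
        """Create a database without waiting on IT or a DBA."""
        self._databases[name] = {"name": name, "size_gb": size_gb}
        return self._databases[name]

    def deprovision(self, name):
        """Tear the database down when it is no longer needed."""
        self._databases.pop(name, None)

    def backup(self, name):
        """Snapshot a database on demand; returns a backup id."""
        backup_id = f"{name}-backup-{len(self._backups)}"
        self._backups[backup_id] = dict(self._databases[name])
        return backup_id

    def restore(self, backup_id):
        """Bring a backed-up database back, even after deprovisioning."""
        record = dict(self._backups[backup_id])
        self._databases[record["name"]] = record
        return record
```

A developer can provision, back up, drop, and restore a database entirely self-service, which is exactly the waiting-on-IT bottleneck the blueprints are meant to remove.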

It doesn’t just stop at SQL Server. These Engineered Blueprints can provide functionality for Exchange Server, like email as a service, automated provisioning of highly available email infrastructures, and backup/recovery on demand. Similar functionality is available for SharePoint Server, Oracle, and SAP applications, too.

By giving application users and developers access to capabilities they didn’t have before, they become more agile and efficient. When businesses can create the applications or enterprise systems they need to compete and to provide products and services to their customers, that’s when real value is unlocked.

Give the people what they want

The right tool for the job is important—whether you’re building a house, repairing a car, or working with an organization’s application portfolio. Most organizations today have a mix of “off-the-shelf” applications, “home-grown” applications, and more modern applications built in cloud-native frameworks. One tool or technology is not necessarily right for all of those workloads as their needs and requirements are different. We believe in providing choice to our customers, and that’s no different here.

Enterprise Hybrid Cloud is a fantastic platform for the off-the-shelf enterprise applications from vendors like Microsoft, Oracle, and others. It’s built from the ground up for this class of application. As I described it above, Enterprise Hybrid Cloud has the capabilities to provide real business value. Dell EMC also has another cloud solution called Native Hybrid Cloud for those customers that are writing the next generation of cloud-native applications. Native Hybrid Cloud provides a turnkey cloud platform leveraging a scale-out architecture and platform integration with Pivotal Cloud Foundry to give developers a platform that accelerates their creation of the next generation of applications.

If customers try to cram a square peg into a round hole and use the wrong platform for the job, it becomes much more difficult to unlock the value that the business needs. Both Native Hybrid Cloud and Enterprise Hybrid Cloud are designed to be delivered quickly from our Dell EMC Services team in order to enable our customers to quickly see the value of their investments.

Listen and learn

One of the most important parts of any of our cloud projects is talking to our customers to understand their goals, business objectives, and the outcomes they’re trying to achieve. It sounds obvious but it’s true – our goal is not to simply drop off a cloud solution in a “one size fits all” manner. We sit down with our customers to understand where they’re going and then, using our cloud solutions as a foundation, work together to craft an architecture that meets their goals. It’s very consultative and outcome-focused.

We want to help our customers achieve their goals and build a lasting relationship. We wouldn’t be successful if we approached cloud as a single solution for everyone, as not all organizations measure the business value derived from their IT investments in the same way. We listen, learn and adapt based on the requirements of all of our customers.

We’re in this together with our customers in marching towards a “cloudy” future. Our goal is to provide solutions that help our customers solve their business problems. It’s an exciting time to be part of our Dell EMC Services team!

From Factory Automation to Cloud Automation https://infocus.dellemc.com/matt-_liebowitz/from-factory-automation-to-cloud-automation/ https://infocus.dellemc.com/matt-_liebowitz/from-factory-automation-to-cloud-automation/#respond Mon, 22 Aug 2016 12:47:16 +0000 https://infocus.dellemc.com/?p=28636 My six-year-old son loves the show How It’s Made on the Science Channel. There are usually many episodes back-to-back on Sunday mornings, and he’ll be up at 7 a.m. or earlier ready to watch. I’ll usually sit down and watch the episodes with him and am fascinated by the automation that goes into creating some […]

The post From Factory Automation to Cloud Automation appeared first on InFocus Blog | Dell EMC Services.

My six-year-old son loves the show How It’s Made on the Science Channel. There are usually many episodes back-to-back on Sunday mornings, and he’ll be up at 7 a.m. or earlier ready to watch. I’ll usually sit down and watch the episodes with him and am fascinated by the automation that goes into creating some of the things we use in our everyday lives.

As I thought more about it, I thought about how it’s not that different from when we work with our customers to adopt cloud computing. These customers need to look at their operational procedures, processes, and how they run their business and begin to identify areas where automation can bring about real savings. After all, cloud is not very useful unless you can automate your IT processes and then offer it out as a service to users and customers. Let’s take a look at some of the lessons we can learn from these big factories that automate their assembly lines to create the products we use.

How did they figure that out?

One thing that I always end up saying to myself while watching the show is something along the lines of, “How did they figure out all of these complex procedures?” In other words, how do they know that the metal they’re working with needs to go into an oven that cooks at a specific temperature for a set period of time in order to harden it properly? How do they know exactly how much ice cream to portion out for each ice cream sandwich?

The answer, in all cases, is that the company who designed the product fully understands what is involved in the process of creating the product. It sounds pretty simple and obvious, but unfortunately many IT departments don’t follow this same logic when they approach automation. IT departments understand they need to automate the process, but in their rush to do so they don’t fully understand all of the implications.

A relatively basic example of this is the deployment of application workloads. Before cloud and automation, the requestor would typically request a server from IT. Now that most workloads are virtualized, it’s relatively easy for IT to create a new virtual machine, often in a matter of hours or days. In some cases they’ll log the entry into a CMDB or update a ticket in an ITSM system and then hand the server off to the requestor.

In that scenario, IT may not have any sense of what happens after the server is deployed. Does the software being installed have any special licensing considerations? What is the expected lifecycle of the server? Does it need to integrate into any other systems? Simply put, if they don’t have the answers to these (and likely other) questions, how can IT be expected to properly automate the deployment of that application? IT needs to work with application owners, developers, and other stakeholders, in order to fully understand what is required before trying to automate the application workloads. By working together with the people who will be using the application they can properly automate it and bring real value to the business.
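As a sketch of that idea (the field names and provisioning step are hypothetical, not any particular platform’s API), an automated deployment workflow can simply refuse to run until those questions have answers:

```python
# Hypothetical sketch: automation should gather the answers IT often
# lacks (licensing, lifecycle, integrations) before it deploys anything.
# Field names and the provisioning step are illustrative only.

REQUIRED_FIELDS = (
    "owner",                      # who is responsible for the workload
    "licensing_model",            # any special software licensing considerations
    "expected_lifecycle_months",  # how long the server should live
    "integrations",               # other systems it must connect to
)

def missing_answers(request):
    """Return the questions the requestor still needs to answer."""
    return [field for field in REQUIRED_FIELDS if field not in request]

def provision(request):
    """Deploy the workload only once the request is fully understood."""
    missing = missing_answers(request)
    if missing:
        raise ValueError(f"cannot automate deployment yet; missing: {missing}")
    # A real workflow would now create the VM and record it in the
    # CMDB/ITSM system before handing it to the requestor.
    return f"provisioned {request['owner']}-vm"
```

The gate itself is trivial; the hard work is the conversation with application owners and developers that produces the answers.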

They use the right tool for the job

In the factory some of the tools used to create products are custom-made, and others can be repurposed. The same factory that makes packaged turkey can likely also create other packaged foods due to the similarities in requirements for automating those processes. Likewise, in the world of cloud, there are many different tools available to automate functions and choosing the right one is crucial to making the most of your investment.

When we work with customers deploying Enterprise Hybrid Cloud, we spend a lot of time up-front understanding the customer’s current state. What tools do they have in place? What skills does the customer’s IT team have, and what technologies can they support? Gathering this information helps us recommend the best solution for each customer. After all, why write a script for something when an existing tool might already be available?

Our customers are often already using tools like Puppet or Chef that can provide key functionality for cloud automation and orchestration. For integrating with third-party systems, there may be existing plug-ins for tools like vRealize Orchestrator that provide this functionality. And, of course, there are other systems that require custom-written scripts to properly automate functions.

When picking the right tool for the job, organizations need to consider many factors. We help them figure that out up-front so they can see real, tangible benefits and savings with Enterprise Hybrid Cloud.

Automate everything?

Occasionally How It’s Made will show certain items being created by hand. In many cases this is due to the precision required for what they’re making. There may be another important reason that the show leaves out: scale.

Just because something can be automated doesn’t necessarily mean it can scale. And, more importantly, just because something can be automated doesn’t mean that it should be. Creating the automation may cost more in money, time, and effort than the benefit customers and end users will realize. The big factories know and understand this, and IT needs to as well. Fully understanding everything that goes into automating your processes, or making ice cream sandwiches, will allow organizations to get the most benefit.

Enterprise Hybrid Cloud brings real value to customers by helping them package up and deliver IT as a service, and automation is a key element of that. By fully understanding what needs to be done to properly automate something—knowing what tools you have available at your disposal and making decisions around the value of automating processes—organizations can derive real value from their cloud investments.
