InFocus Blog | Dell EMC Services – Dell EMC Global Services Blog

3 Surprising Video Trends that Should Inform Your L&D Strategy

Imagine a cattle stampede that continues for five years, and you’ve also pictured how the populace has stampeded from text to video. According to YouTube’s Press page, people watched a lot of YouTube video in 2013. In 2014, they watched three times as much as they did in 2013. In 2015, the numbers tripled again.

The masses aren’t merely watching video. They’re turning to online video as their preferred method of learning, whether the topic is how to do math or how to use a chainsaw. This mass transition to educational videos has dragged corporate Learning and Development departments into the video-production business – and if you’re a corporate L&D pro with no background in video, you’re having to glean knowledge along the way.

Formulating Courseware Strategy on Common Knowledge Is a No-go

What’s the approach to formulating a strategy for video-based courseware effectively?

You hear tidbits on trends and “common knowledge” in the industry such as: “A training video can be only five minutes long,” or “Millennials watch training videos on smartphones, but everyone else watches on PC.”

Is such “common knowledge” really… knowledge? Where’s the data that supports these “facts”?

Folklore deserves healthy skepticism.

To plan and gauge our courseware effectively and optimize our customers' learning experiences, we need firsthand, well-sourced data about how people actually interact with video.

I get such data from Ooyala, a resource that offers broadcasters and premium content providers (such as Vudu, Sky Sports UK, and Star India) management tools that help them monetize video content. Ooyala tracks and analyzes the viewing behavior of more than 120,000 anonymized viewers in more than 100 countries, then publishes its findings quarterly. You can download Ooyala's Global Video Index free and study it yourself.

Defying conventional wisdom, three surprising findings from Ooyala’s most recent report could help you optimize your Learning & Development efforts.

Video Trend #1: Longform Is In on Smartphones, Tablets and PCs

For three of the last five quarters, the majority of video watched online was longform – industry-speak for running times over 20 minutes.

  • Videos running 2-5 minutes account for only 38% of the time spent watching video on smartphones.
  • On tablets, longform accounts for 75% of all video time watched.
  • On PCs, viewers watch longform content to completion a whopping 71% of the time.
  • Viewers watch longform to completion on tablets 61.3% of the time.
  • Viewers watch longform to completion on phones 56.6% of the time.

The takeaway: While many factors determine how long your viewer sticks with you (to name a few: relevance, production quality, their reason for watching), the latest research directly contradicts the rote “knowledge” that viewers leave after a few minutes. Although the video offerings Ooyala measures mostly consist of entertainment, their data reveals that the majority of viewers will complete a 22-minute video if it’s interesting, regardless of subject material.

Questions to consider: How might using a longer format affect the way you subdivide your content? Can your content hold interest that long? Can you identify topics where learning and retention would benefit from not being shoe-horned into five minutes?

Video Trend #2: Mobile Video Is Mainstream Now

In Q1 of 2018, the number of videos viewed on mobile devices was up all over the world. For example, of all video plays in Asia-Pac, 60.7% occurred on mobile devices. EMEA and Latin America hit all-time highs for mobile’s share of video plays.

Mobile video views also rose to being the majority of views in every age demographic, everywhere.

The takeaway: Common knowledge held that mobile viewership was a niche for the young or for early adopters. Now, the majority of all video views occur on a tablet or phone. If you’re still developing courseware primarily for desktop PCs, you’re offering yesterday’s modality to an audience that’s rapidly leaving it. Consider whether your courseware developers should start thinking, “Mobile first.”

Video Trend #3: Streaming Is Overtaking Conventional TV

Sixty percent of all households that have a broadband Internet connection have at least one Streaming Video On Demand (SVOD) service (think Netflix, Hulu, HBO Now). The most rapidly growing segment is “households with four or more services.”

Content creators are scaling up massively to meet the anticipated need for content on demand. Top content providers processed three times as much content in Q1 2018 as they did in Q1 2017. This trend won’t abate as heavyweights such as Apple and Disney race smaller providers to launch new streaming services in 2019.

The takeaway: Consumer culture drives relentlessly toward “get what you want, when you want it.” In that context, how happy are your customers to wait weeks for your five-day training class to roll around again? Businesses that offer customers video training on demand will probably enjoy a growing advantage over competitors offering conventional courseware.

At Dell EMC Education Services, we are working tirelessly to develop an on-demand video learning platform so customers can choose traditional classes, instant video support, or a combination. We've also begun adding interactivity so that viewers can click on a video table of contents, or click within a video to branch to a more in-depth related video. This is the near-term future of learning.

Summary

In times when what “everyone knows” about learning videos might be unfounded, finding a reliable source of data can improve your predictions and planning. Ooyala is not the only source, but it’s free, well-derived, and gives me a refreshing reality check against what I thought I knew. Check out the report for yourself. When it comes to customer behavior, timely trend-spotting can determine whether your training content lands with a thud or a whoop – and whether your fiscal year ends with an oops or a yay!

Please feel free to comment or share your insights with me below.

Windows 10 Migration: Best Practices for Making a User's First Impression Great

First impressions count!

This is the third in a series of blog posts looking at the enhancements and issues that our customers will experience when migrating data and applications to Windows 10 – a process which is truly a transformation, a move to modern IT management. I'll provide best practices for overcoming the respective challenges and suggest ways to make a user's first impression of Windows 10 great.

The Very First Impression on the Windows 10 User: Shorter Boot-up Time with SSD

Little things count with first impressions, and there are good ones with Windows 10, beginning with the moment the user turns on his or her device. Boot-up time is remarkably shorter thanks to the dramatic improvements Microsoft has made so that Windows 10 runs fast on a solid-state drive (SSD). Solid-state drives are also much more reliable, meaning less downtime due to failures.

The next set of favourable impressions relies on ensuring all the user's data and applications have been migrated to the new device. This sounds simple, but in practice applications are proving the more difficult of the two, although there is much commonality between them.

Migrating Data to Windows 10: EFSS and OD4B

Most organisations are moving to an Enterprise File Sync and Share (EFSS) solution such as OneDrive for Business (OD4B) and looking to use it as part of the migration process. In theory, EFSS makes it easy for the user and less work for the migration team: the user logs in on their new device, configures the sync client, and the data starts to replicate. The difficulty arises from the volume of data each of us stores today, which defines the time needed to complete the synchronisation.

All organisations have pockets of low bandwidth, and even in well-connected offices, volume rollouts can put pressure on the links as the number of users synchronising simultaneously reduces the bandwidth available to each.

OD4B addresses the low bandwidth issue by allowing users to partially synchronise their data with the local machine but they need to be educated to use this capability carefully.

Synchronised files are available offline but those that are yet to be synchronised can only be accessed when the user is online. Partial synchronisation offers the benefits of a reduced time to complete and less space used on the device but forces the user to connect to get a cloud-only file.
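
To see what this means for rollout planning, here is a rough, illustrative Python sketch – every figure in it is a placeholder assumption, not a measurement – of how synchronisation time scales with data volume, link speed and the number of users syncing at once:

    # Rough estimate of OneDrive for Business sync time during a rollout wave.
    # All inputs are illustrative placeholders - substitute your own measurements.
    def sync_hours(data_gb: float, link_mbps: float, concurrent_users: int,
                   efficiency: float = 0.7) -> float:
        """Hours for one user's data to sync while sharing the office link.

        `efficiency` roughly accounts for protocol overhead and contention.
        """
        per_user_mbps = (link_mbps * efficiency) / concurrent_users
        data_megabits = data_gb * 8 * 1000           # GB -> megabits
        return data_megabits / per_user_mbps / 3600  # seconds -> hours

    # Example: a 50 Mbps office link, 20 users syncing at once, 30 GB each.
    print(f"{sync_hours(30, 50, 20):.1f} hours per user")  # ~38.1 hours

Even crude numbers like these make the case for partial synchronisation and for staggering rollout waves.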

Migrating Applications to Windows 10: Configuration Manager

Applications are a tougher challenge to migrate, and how you address them depends on whether you have chosen to Shift Left or Shift Right. Broadly speaking, the challenges are the same, but the toolsets are different. Most of our customers are still using a tool such as Configuration Manager (ConfigMgr) to build their devices. I will talk about the differences seen with UEM toolsets such as Intune and Workspace One in a later section.

In most current systems, the device is built by a Task Sequence and brought under ConfigMgr management. So far, so good – but how do we ensure we have all the user's applications on the device, ready for them?

To address this question, we first need to know:

  1. Which applications are in our estate today?
  2. Which of those applications are authorised to be in our estate?
  3. Of the authorised applications, which versions are Windows 10 compatible?
  4. Do we have a package containing our Windows 10 compatible version of each authorised application ready for distribution?

Answering these questions affirmatively means we now have a library of applications ready for our users, but one question remains, and this is often the challenge for customers – which application is used by whom?
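
As a minimal sketch of that cross-check – the application names and lists below are hypothetical stand-ins for your discovery and packaging data – questions 1 to 4 reduce to simple set operations in Python:

    # Hypothetical inventory data - in practice this comes from your
    # discovery tooling and application packaging records.
    discovered  = {"AppA 1.0", "AppB 2.3", "AppC 4.1", "ShadowApp 0.9"}
    authorised  = {"AppA 1.0", "AppB 2.3", "AppC 4.1"}
    win10_ready = {"AppA 1.0", "AppC 4.1"}   # Windows 10 compatible versions
    packaged    = {"AppA 1.0"}               # packages ready for distribution

    unauthorised    = discovered - authorised     # Q2: remediate or remove
    needs_upgrade   = authorised - win10_ready    # Q3: no compatible version yet
    needs_packaging = win10_ready - packaged      # Q4: packaging backlog

    print("Unauthorised:", sorted(unauthorised))
    print("Needs a Windows 10 compatible version:", sorted(needs_upgrade))
    print("Needs packaging:", sorted(needs_packaging))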

Applications Installed Manually by a Technician versus Automatically by ConfigMgr

Whether applications are installed manually by a technician or by an automated process using ConfigMgr collections, it is only possible to know that the job is done if you truly understand what the task called for. Furthermore, the list of required applications needs to be accurate and provided ahead of the deployment – rather than gathered by asking the user two days before the anticipated device handover – because it must be cross-checked against the answer to Question 4 above.

If we have this list, the best way to make this work is to code the detail into ConfigMgr and use it to deploy applications to a specific device.

Whilst it is possible to target application deployment to the user rather than the device, this would mean that the applications are deployed only once the user logs on for the first time, which is exactly the type of experience we are hoping to avoid.

Targeting the device, however, means we need to make the device user-specific from the moment it is first built. This is a workable approach when deploying in small quantities. For volume deployments, the additional time the deployment engineer or tech-bar staff spend searching for the specific device for each user is likely to rival the time it would take the user to deploy their own applications.

As a result, many organisations will choose to target applications at the line-of-business or departmental level rather than aim for full user specificity, to find the best balance between cost and benefit. User-specific applications can then be self-installed using the Software Center component in ConfigMgr. This approach means the user can get started immediately whilst their specific applications are installed in the background.

Dell’s Connected Configuration Service: Making a User’s First Impression of Windows 10 Great

Dell offers our Connected Configuration Service to enable customers to extend their ConfigMgr environments via a VPN into our logistics chain. This means that devices can be built using your task sequence, joined to your domain, and have applications deployed just as they would be in an onsite build facility. Once prepared, the devices are re-boxed and delivered to their new owners by our logistics team.

Figure 1: Dell’s Connected Configuration Service

The Shift Right Approach: Impact of Unified Endpoint Management

For those that have chosen the Shift Right approach, the application set will now be delivered via their Mobile Device Management (MDM) toolset, of which the two main players are Intune and Workspace One (formerly AirWatch). The industry is moving to the term Unified Endpoint Management (UEM) to denote that these toolsets have matured to allow both mobile (smartphone) and PC management from the same toolset.

Regardless of the chosen tool, applications will be targeted to the device once the device enrolment process has completed and the device has been assigned a profile in the tool. In this case, the equation used for data – time to complete ≈ volume of data ÷ available bandwidth – can be rewritten for applications: time to complete ≈ total size of the applications to install ÷ available bandwidth.

As many desktop applications are of significant size and some users need many applications, the time to complete can often be measured in hours. This is where aligning the application distribution approach with user personas becomes important. In How to Modernize Your PC Management Approach, I argued that UEM toolsets were best suited to users with the lightest on-device application requirements; for users who rely most heavily on Software as a Service (SaaS) or web applications, this is less of an issue.

Inevitably users will still require additional locally installed applications, and we would prefer to preinstall those before the user gets the device, to ensure the best first impression. If the tool can only distribute applications after the enrolment process has completed, but we need to deploy applications to a device before the enrolment process starts, how can we break this logjam?

The answer lies in Dell’s ability to preconfigure devices before we ship them.

Dell’s Dynamic Imaging: Making a User’s First Impression of Windows 10 Even Greater

Dell offers our customers the ability to ship devices with a customer-specific build preloaded onto them, a process we call Dynamic Imaging. Dynamic Imaging applies an image to the disk and injects into that image the driver pack for that device. This process enables us to support customers who want to maintain a single image for multiple hardware variants.

Using Dynamic Imaging, customers can include common applications that apply to all users – for example, security tooling, Office, and PDF reader applications. In the past, customers made this image very application-rich to minimise the impact of installing user applications over their network. However, the image became bloated and difficult to manage. We therefore guide our customers to keep this image as lean as possible.

So how do we meet our target of preinstalling user applications?

Here at Dell, we regularly talk about the Dell Technologies Advantage, where different brands within the family come together and the result of that collaboration is a real customer benefit. In this case, our Configuration Services team has worked with the Workspace One part of VMware to bring forward a solution to the application pre-provisioning problem.

Applications, or groups of applications, that are prepared for delivery via Workspace One can be exported from the tool to a PPKG file. The tool also provides an interface to build an Unattend.xml file to allow automated on-premises (AD) domain join and enrolment with Workspace One. The combination of the PPKG and the Unattend.xml file is then transferred to Dell via a secure FTP service.

Figure 2: Dell Configurations Services (Workspace One)
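
For orientation, here is a skeletal Python sketch that emits roughly what the domain-join portion of an Unattend.xml looks like. Every value is a placeholder, and in this workflow Workspace One generates the real file for you:

    # Emits a skeletal Unattend.xml domain-join stanza.
    # All values are placeholders; Workspace One builds the real file.
    UNATTEND = """<?xml version="1.0" encoding="utf-8"?>
    <unattend xmlns="urn:schemas-microsoft-com:unattend">
      <settings pass="specialize">
        <component name="Microsoft-Windows-UnattendedJoin"
                   processorArchitecture="amd64"
                   publicKeyToken="31bf3856ad364e35"
                   language="neutral" versionScope="nonSxS">
          <Identification>
            <Credentials>
              <Domain>corp.example.com</Domain>
              <Username>domain-join-account</Username>
              <Password>placeholder</Password>
            </Credentials>
            <JoinDomain>corp.example.com</JoinDomain>
          </Identification>
        </component>
      </settings>
    </unattend>
    """

    with open("Unattend.xml", "w", encoding="utf-8") as f:
        f.write(UNATTEND)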

A Dell Configuration Services technician then boots the device, applies the Windows 10 build, drivers and the PPKG file, and places the Unattend.xml file on the disk. The device is then placed back into its shipping carton and delivered to its new owner. On receiving the device, the user only needs to install any applications from the Workspace One application store that they use over and above those specified in the PPKG file.

For example, the PPKG file might be department-specific, but a user may require two applications that no one else in their department uses. These applications can be installed by the user. Importantly, though, the user can do most of their work whilst those applications are provisioned.

Dell and Your Device Deployment

Dell has industry-leading Configuration Services which can give your users the best first impression when they receive their new Windows 10 device, whether that device be delivered using ConfigMgr or UEM tools.

When these Configuration Services are combined with VMware Workspace One, the Dell Technologies Advantage provides the best solution in the marketplace today to support the needs of your ultramobile users.

Figure 3: The Winning Combination – Configuration Services + Workspace One + Dell Technology Advantage

If this post has helped you formulate your best route to Windows 10, or if you have more questions, I would love to hear from you.

You may be interested in these other blogs:

Windows 10 Migration: Should You Shift Left, or Right?
How to Modernize Your PC Management Approach

Every Day is #CX Day

In services, as in every industry, delivering the best possible customer experience is what drives digital transformation.

How do you celebrate CX Day?

As you may already know, CX stands for customer experience – and CX Day is a global, industry-wide celebration of the companies and individuals that create exceptional customer experience. At Dell Technologies, we will celebrate our customer relationships and recognize the team members who make great customer experience happen.

It also reflects the growing importance of CX in our digital world, where it is predicted that by 2020:

  • The quality of customer experience will overtake price and product as the primary brand differentiator
  • Every $1 invested in customer experience will deliver $3 in ROI

Celebrating the Everyday Heroes

For Dell Technologies, 2018 will be our 5th annual CX Day with 80 celebration events for employees in more than 20 countries across six continents. It’s an opportunity to thank and recognize our global team members, who are the heart and soul of delivering our best-in-class customer experience.

So, it may surprise you to hear that, this year, I’ll be somewhere else.

That’s because, quite appropriately, CX Day coincides with the date of our annual Dell EMC Service Partner Forum. The services team and I will be meeting with 350 of our top support and deployment services partners, who are a critical part of delivering the best possible experience to our customers around the world.

The forum recognizes the contribution of service partner companies that provide everything from tech support, to channel services, to parts, logistics, training and field service to our customers. We also conduct working sessions to advance joint roadmaps for improving the Dell EMC service experience.

Improving CX through Digital Transformation

Today, no matter what business you’re in, the pressure is on not just to enable but to accelerate “digital transformation.” The goal is to be able to move beyond traditional product- and market-focused strategies and focus efforts on the quality of the customer experience with capabilities such as real-time customer engagement, actionable insights, and personalization.

The services business is no different. The objective of delivering the best possible customer experience―to every customer, every time―is driving how we design and deliver service and how we collaborate with customers to achieve their objectives.

That’s why “Digital Transformation of Services” is a key theme at our partner forum―and for our team of 60,000+ Dell EMC Service and partner professionals worldwide.

Just as enterprises of all kinds are striving to put advances in data science, artificial intelligence (AI), business intelligence (BI), virtual and augmented reality (VR/AR), machine learning (ML) and deep learning (DL) to work for competitive advantage, we are working to put these same technologies to work to deliver the next generation of services our customers need to succeed.

Proactive, Predictive, Personalized

By utilizing our breakthrough automated, predictive and proactive support capabilities, such as those powered by connected technologies that continually monitor system state, we’re able to identify and resolve issues much faster. Over the years we’ve continued to evolve these capabilities to increase our ability to predict and proactively address impending problems before they even have a chance to occur and impact customers.

Notably, today we are the only IT service provider that gives customers a consistent, proactive, predictive and data-driven support experience across their entire environment―from PC to data center. For example, with ProSupport Plus and SupportAssist customers can experience up to 92% less time to resolve a failed hard drive issue and up to 72% less IT effort to resolve server issues. And we’ve extended this experience from commercial customers to consumers.

Recently, we’ve been working to provide even more personalized and effortless support that leverage data science, machine learning and AI-empowered technology to deepen our relationship with customers. For example, we’ve developed personalized dashboards that provide enterprises with near-real-time and historical data about their services history, with the ability to visualize trends or drill down to specific technologies, service calls, and timeframes.

The result has been greater collaboration for better and more efficient service delivery, as well as insights into how customers can improve internal IT processes and skillsets to optimize the end user experience and help shift IT focus from maintenance to innovation.

CX is Job #1

At Dell Technologies, we understand that our customer relationships are the ultimate differentiator and the foundation for our success.

Today our support and deployment services have a 94%+ customer satisfaction rating. But we’re not satisfied.

Together with our partners, our global services organization of 60,000+ subject matter experts, consultants, project managers and engineers is committed to continue to expand and evolve our services to make it easier to put digital technologies to work―from selection, to consumption, to adoption, optimization and support―from the edge to the datacenter to the cloud.

So, our customers can accelerate their own digital transformation and in turn, provide their customers with a CX worth celebrating.

Why the Workforce Needs to Change for Digital Transformation

Digital transformation is reshaping today's workforce, and let me underscore this: effective workforce transformation – transformation readiness among employees – is critical to successful digital transformation.

Digital-first: Transformation, Technology and Readiness

Digital transformation and digital readiness have become catchphrases in their own right. The terms signal benefits for everyone – including those not engaged in an IT profession – and a call to embrace a common understanding of modern technologies as we hurtle towards a world of increasing disruption. More devices are connecting more people in effective and collaborative ways.


To define digital transformation as simply “the application of digital technology to impact all aspects of business” is to shortchange its true meaning. Digital transformation is also the resultant change in how people do their work, make decisions, solve problems and achieve results. Ultimately, then, an individual’s transformation readiness contributes to organizational readiness and is linked to improved business outcomes.

Look at it this way:

Digital transformation causes tremendous changes, advances, and breakthroughs across businesses globally. It also creates an immediate and growing need for digital readiness.

Digital technology comprises the tools and processes with which people work. In terms of workforce solutions, we at Dell EMC like to say that the right technology in the right people's hands allows them to work without limits.

Digital readiness is the transformation in people's thoughts, perceptions and approaches to working with digital technology.

The most exciting definition I’ve come across for digital mindset in terms of readiness is from Shahana Chattopadhyay in her article 7 Characteristics of a Digital Mindset:

A digital mindset comprises a set of behavioral and attitudinal approaches that enable individuals and organizations to see the possibilities of the digital era, to use its affordances for deeper personal and greater professional fulfillment, and to design workplaces that are more human-centered, purpose-driven and connected. An individual with a digital mindset understands the power of technology to democratize, scale and speed up every form of interaction and action. Having a digital mindset is the ability to grasp this spectrum of impact of the Network Era, and the capabilities and attitudes required to face it with equanimity.

Agreed!

It is obvious that the technological transformation without the readiness transformation achieves less than its potential. Building readiness is as important as installing and integrating the technology components. A business cannot digitally transform unless — or until — its people transform.

Building transformation readiness is part and parcel of building the digital culture. A digital culture is replete with the technology and the readiness and the integrated applications of both on a continuous basis. A digital culture identifies with its digital technology. A digital culture thinks and talks and walks the progressive connection between people and technology.

Fittingly, Michael Dell writes in Realizing 2030: A Divided Vision of the Future:

We’re entering the next era of human-machine partnership, a more integrated, personal relationship with technology that has the power to amplify exponentially the creativity, inspiration, intelligence and curiosity of the human spirit.

To build the digital culture throughout an organization in this next era requires a two-part strategy: communication and engagement. Both require careful planning and intricate implementation.

Let’s examine them one at a time.

Speaking the Language

The sooner and the more people speak the language that reflects the new culture, the sooner and the more completely the culture is realized. This does not mean merely throwing around the buzzwords and catch phrases that advertise digital transformation.

It means using the language that expresses the culture: its components, its processes, its benefits, its values, its constraints. Using the language includes explaining it at every point necessary to ensure that every person understands. Explaining it with the intention that everyone grasps the culture requires taking one's time to communicate clearly and completely.

Approach the communication with attention to message (what to say), messenger (who will say it), and frequency (how often to say it).

The message may best be developed by asking and answering questions. A well-written blog post by Jim Reznicek titled Preparing Your Workforce for a Digital Transformation appeared on the Jabil blog in March 2018. It recommends that these specific questions be addressed with employees:

  • What is digital transformation?
  • Why is our company undergoing a digital transformation? What are the new technologies that will be introduced to our daily work?
  • What impact will the digital transformation have on our employees?
  • What is the timeline for the digital transformation?
  • How will the company prepare employees for upcoming changes?

To those above, I would add this question: what will such transformation enable me to do better than I do today?

Communication surrounding these questions can be presented in a number of ways. First, it is essential that the business's leadership team has an active role in communicating, from their perspective, the hows, whys and whens of the digital transformation happening to the business. Members of the business want to hear the CEO's answers to such questions. Then they want to compare them to the answers from the CFO, COO, CIO, CCO…all the way to their immediate managers and team leaders. It is almost impossible for employees to hear too much about the full meaning of becoming a true digital culture.

In today’s intensely competitive global business arena, every edge is critical—and for most companies, there is no greater edge than a talented, motivated, and creative workforce.[1]

As much as employees want messages from leaders, they also want frequent and structured opportunities to share their own understanding, viewpoints, and possibilities regarding “all things digital.” Building a culture requires as much talking as listening.

Engaging the Players

The opportunity to discuss what’s going on regarding the business’s digital transformation is a critical form of engaging the people in developing the new culture. Consider three additional approaches to engaging employees to enhance their digital mindset.

  • Learning. One primary purpose of digital transformation is to remove routine, predictable, pattern-finding tasks from the human assignment. That means people will be – or will be expected to be – engaged in interactions with other people, in design thinking, and in creative production. That will require learning opportunities in Agile/Scrum methodology, design thinking, and collaboration skills.
  • Advances in AI, from if-then-when algorithms to machine learning, significantly alter the learning experiences in which people can engage. Individuals can effectively be put in control of their learning as digital technology provides ways to strengthen digital readiness. Experiential learning platforms offering blended, complementary information and education allow employees to experience digital technology working for them.
  • The tools and platforms through which information, education and learning experiences reach the individual are increasing in novelty, number and effectiveness. Consider online/on-demand, mobile, live streaming, learner-produced videos, and AR/VR…alongside the instructor-led classroom training you remember. The more a business uses this variety of exposures, the more thoroughly it builds digital technology, mindset and culture into success.

Summary: 4 Tips for ‘Going Digital’ Effectively

  1. Digital technology should be accompanied by a true digital mindset among team members.
  2. An embedded and comprehensive digital readiness generates and reinforces a true digital culture.
  3. A well-designed and implemented communication strategy enables all members of the business to talk the digital talk that strengthens the digital culture.
  4. Engaging everyone in the business in learning, experiencing and enjoying exposure to the many ways digital makes a difference is the other half of the digital culture strategy.

The unrelenting pace of digital and workforce transformation is creating new challenges for all of us. The Dell EMC Education Services team is focused on enabling customer success by expanding our education and certification portfolios for today's market. If you have any questions or would like to learn more about Dell EMC Education Services' training and certifications, contact your Dell EMC representative or comment below and I'd be happy to respond.

[1] In Dell’s research Unleash the Creative Force of Today’s Workers, it found 20% of workers are satisfied with their technology and 42% of Millennials are likely to quit a job because of substandard technology.

Sources:

The Growing Demand for AR/VR in the Workplace

Redefine Your Workforce Enablement through Productivity

Unleash the Creative Force of Today’s Workers

 

IT Transformation Maturity: an APJC Perspective

A new ESG Research Insights Paper provides a fascinating update on the state of IT Transformation Maturity across industries and around the globe.

Compared to research conducted one year ago, it’s clear that IT organizations are making progress. The rise is most evident in the lowest stage of IT maturity, with the proportion of organizations ranked as “Legacy” shrinking from 12% to 6%.

The research also broke down results by geography and industry. For example, of the 4,000+ IT decision makers surveyed, 1,374 are from six APJ countries. While organizations face the same goals and challenges (e.g., 79% of APJ respondents say transformation is important for business success, compared to 82% worldwide; and 94% in APJ report transformation initiatives underway, compared to 96% worldwide), there are some interesting nuances.

Organizations in APJ are ahead of their counterparts in modern technology adoption, more likely to have moved to hyper-converged, software-defined infrastructure and enabled self-service IaaS (13% vs 9% worldwide). But APJ statistics also show a broader disparity in IT Transformation Maturity―with more companies having achieved Stage 4 “Transformed” status (8% vs 5% worldwide), but also more organizations stuck in Stage 1 “Legacy” (10% versus 4% worldwide).

Figure 1: IT Transformation Distribution – The Maturity Curve

Familiar Obstacles

Globally, the proportion of fully “Transformed” IT organizations has only grown by 1%.

Why?

In our experience, while every business is different, we do find that organizations run into similar and familiar obstacles—across geographies and across industries.

A bank we’ve begun working with in EMEA, for example, recognizes that speeding the delivery of innovative and quality digital services is critical to competing with new players, as well as other banks. But progress toward that objective has been halting and slow.

What’s the Hold Up?

IT leaders understand that they need modern, software-defined infrastructure for multi-cloud flexibility, cloud native application, and DevOps capabilities to improve quality, innovation, and time-to-market.

In many organizations, however, the focus has been almost exclusively on the infrastructure aspects of “moving to the cloud.” The application, organizational, skillset, and process changes necessary to put cloud to work have been largely ignored or treated as a low priority.

By now, I think, we’ve all heard the adage that a successful IT transformation must encompass “people, process and technology.” 

So why isn’t it happening?

It’s Not Easy!

Few organizations have the luxury of a greenfield deployment and must figure out how to continue to operate legacy applications and infrastructure—maintaining security, availability, and so on—while reducing technical debt and moving to new kinds of application architectures and development practices. Progress requires understanding complex interdependencies and synchronizing changes across multiple domains.

While a modern software-defined infrastructure provides the foundation for the efficiency and agility that digital business demands, it is not enough.

The biggest stumbling block for most organizations is people and process. In place of traditional silos of teams, tools, and processes responsible for managing specific technologies, new roles and skillsets must be defined and developed for end-to-end services delivery.

This challenge is bigger than providing self-service portals and service catalogs for IaaS, PaaS, and so on. It requires people to acquire new kinds of “soft skills” for working closely with businesses to determine and anticipate new needs, for leading agile scrums, for creating and promoting new services, and monitoring the quality of services.

Given the difficulties, it’s not surprising that many initiatives get stuck in an endless planning phase—or fall apart into fragmented, disconnected projects.

Agile Means Leveraging What’s Already Been Done

The good news is that enterprises don’t have to start from scratch. IT transformation programs can build on the experience, solutions and services of others. Like the bank, more and more of the enterprises we work with are asking us not just for technical expertise and support, but for help with IT Transformation.

Over the past 15+ years, Dell EMC has developed and refined methodologies and unique IP and tools for helping organizations develop a holistic top-down and bottom-up IT Transformation strategy.

We offer proven and pragmatic ways that enterprises can accelerate building their business case, keep application, infrastructure, and operating model initiatives aligned and in sync, and sustain IT transformation program momentum over time.

Where Are You?

To thrive in a digital economy fueled by smart, connected devices, personalized services, and data-driven insights, businesses need the speed, agility, efficiency, scale, and cost-effectiveness enabled by IT Transformation.

Figure 2: IT Transformation Outcomes – The Link between IT Transformation and Business Value Is Clear.

Progress begins with an objective understanding of where you stand today. An interactive online assessment tool based on the latest IT Transformation Maturity research data can help by providing a benchmark to your peers in both geography and industry, and customized recommendations and a blueprint action plan you can use to accelerate your IT Transformation.

Moving to Multi-cloud: Roadmap Considerations

This blog is part of the Moving to Multi-Cloud series, which gives practical advice on how to move your multi-cloud strategy forward.

Not All Roadmaps Are Equal

Many organizations are busy working on the move to multi-cloud, but all too often their initiatives are disconnected and scattered, with application developers building in one set of clouds and infrastructure teams building up their multi-cloud capabilities without sufficient alignment with the application teams.

These organizations typically start with an infrastructure roadmap focused on getting their new technology in place. The impact on the operating model and service delivery is often an afterthought. When it comes to applications, many organizations take a "cloud first" approach, looking to build or update their applications in public clouds without regard to data compliance, interdependencies or real cost. They have expectations of increased availability, scalability, speed and flexibility at lower costs.

To build an effective roadmap, you need to understand and integrate infrastructure, operating model and application activities. There are major dependencies between these areas that impact the success of the transformation.

Infrastructure Considerations

The first step in determining infrastructure roadmap initiatives is to understand your current state. Key dimensions to consider include:

  • Services: Is there a service catalog? Are the services standardized? Is there a self-service capability? Do you offer both infrastructure and cloud-native services?
  • Inventory: How accurate is the CMDB? Is it aligned with the services IT provides?
  • Asset alignment: Is asset information (for example costs, contracts, service lifecycle) linked to the CMDB?
  • Stack: What software components (e.g., VMware, container-based, OpenStack) and approaches are being used to deliver services?
  • Security: What kind of ID access management and data protection is there? Are the rules and processes sufficient and consistent for a multi-cloud scenario?
  • Funding model: Is the budget project-based or IT-owned? Is there a showback/chargeback capability? How will it be impacted by a multi-cloud implementation? Will the rates be platform- or service-specific?
  • Physical environment: How many data centers do you have? What are their sizes? What is actual peak utilization? What are the various technologies? Is there virtualization? What is the current public cloud usage?
  • Monitoring/Performance: What kind of monitoring is there? Real-time monitoring, alert management, business alignment, or escalation management? How is performance measured and reported? Is there a performance database, dashboards, or SLA reports? Will there be a single pane of glass for both internal and external systems?

You also need to define your target state, which will drive your data center strategy. Depending on the application strategy, the target data center will not need to be as big if you're moving some applications to the cloud. Think about the effects of increased virtualization, software-defined infrastructure (SDDI), flash, and converged/hyper-converged infrastructure. Also consider services-based architectures (IaaS, PaaS) and resiliency strategies. Think about having a service catalog that offers self-service provisioning of these services.

Operating Model Considerations

As you build out the roadmap, current roles and processes will need to evolve to be more in line with the services and development methodologies. Is there a true service management operating model? Is the IT operations team integrated with the application operations teams? There may need to be new IT service-based roles and processes, as well as new DevOps roles and processes.

How closely is IT working with the business units and with developers to understand and manage both IT and cloud-native service demand? Business Relationship Managers (for aligning business needs with IT services), Service Portfolio Managers (for managing the catalog and services life cycle), and Demand Managers (for structuring the demand) roles may have to be created.

Do you have a capacity management process? Is it siloed, integrated, or federated? Is it linked to demand, organic and new business trending? Are there dashboards indicating effective capacity utilization, predictive needs? What do you need to do to be able to right-size your operations to ensure resources for current and future services?

Consider a financial management strategy for transparent service-based pricing and billing. This will allow you to recover costs for services based on usage as well as demonstrate market-competitive pricing.

Application Considerations

Your application and infrastructure strategies need to be coordinated between the Application Development teams and IT. Rather than just shifting virtual machines to the new infrastructure, determine the disposition of current applications: which should be migrated to the new environment, which should be moved to public cloud, which should be replaced with SaaS applications, and which should be retired. Prioritize applications moving to the new infrastructure, making sure that the progress of infrastructure capabilities closely matches the technical needs and business priority of particular applications.

In addition to applications, there are many development tool stacks available; typically, every application development group has a different combination of tools and integration environments. These often require infrastructure resource APIs for the rapid provisioning and decommissioning of developer environments. A partnership in which the app dev teams and IT agree on the development stack, or on a standard API that IT will provide, is essential. This will enable the application teams to simply request the environment they need, freeing them to focus on their real work – developing applications. It may mean reviewing the current portfolio of tools, understanding where there are opportunities for rationalization, or building a standard API for infrastructure resources. All of this depends on the size, culture and diversity of the lines of business IT supports.

Focus on Outcomes

Rather than trying to address many requirements and features with a large, long-term development project, developers often take an agile approach, quickly delivering just a functional piece of what's needed – a minimum viable product (MVP). They then adjust development of the next MVP based on feedback from the first.

This is an effective strategy for transformation roadmaps as well. Align activities and deliverables across infrastructure, operating model and applications, and think about overall transformation outcomes, not just those of each transformation area. For example, aim for the minimum set of infrastructure services needed by application owners and software developers – something that can be built in a few months – versus trying to create the perfect set of services over 1-2 years. Similarly, for the operating model, focus on a small set of processes and roles that can quickly be aligned to a more agile cloud operating model instead of trying to overhaul all processes at once.

Set a short timeframe for the first MVP milestone, ideally 4-6 months. Each MVP milestone should have metrics and deliverables for specific outcomes. The idea is to get beyond programs that run for years before they deliver value; you want to be able to show value at each stage of the transformation. The outcomes of each MVP stage should be carefully monitored, and the following roadmap stages modified based on the feedback.

Don’t Forget About Governance!

Successful transformations include a dedicated governance track, with regular reviews of program progress. Consider including a Transformation Program Office (TPO) integrated within your IT organization to provide overall transformation oversight. The TPO, often composed of a senior project manager and an enterprise architect, operates as the single point of accountability for, and management of, schedule, resources, scope, budget, issues, and risks.

Consider incorporating a steering committee of key stakeholders and decision makers whose input and support will ensure business alignment and executive sponsorship. Think about integrating a change management program to ensure smooth business operations during the transformation.

A communications program to promote success at various stages of the transformation, including metrics where you can, is critical. This helps everyone understand what’s going on while building confidence and support in the program.

Summary

By integrating infrastructure, operating model and application activities, establishing metrics and identifying outcomes, as well as defining the overall governance needed to successfully manage the transformation, you will be able to build a highly prescriptive and effective transformation roadmap.

Blog in the Series

Moving to Multi-cloud: How to Get Stakeholders Aligned (Part I)

 

Demystifying Software-defined Networks Part VI: SD-WAN Adoption Accelerates as Platforms Mature

It’s been a few years since the promising new future technology called SD-WAN came on the scene, so is this related to SDN or a new concept?

SD-WAN stands for Software-defined Networking in a Wide Area Network. It shares key pillars of SDN, such as separating the control plane from the data plane and centralizing control of the network in an SDN controller, and both enable automation and orchestration of network devices. So what's the difference? It's a matter of the forest versus the trees: SDN has multiple use cases – Application Delivery Networks, Central Policy Control, Terminal Access Point (TAP) Aggregation, Data Center Optimization, Virtual Core and Aggregation, SD-WAN, and so on.

SD-WAN is one application (among many) of SDN technology, focused on wide area networks; it allows companies to build higher-performance WANs using lower-cost Internet access technologies.

Figure 1: SDN Use Cases.

The Benefits of SD-WAN

SD-WAN was designed to solve challenges such as optimizing network connectivity between conventional branch offices, data centers and MPLS (Multi-Protocol Label Switching) networks; deploying or modifying services much faster and more efficiently; and coping with network congestion, packet loss, jitter and latency. The reality is that "old" traffic flows were not designed for the explosion of traffic and bandwidth driven by the success of cloud computing and on-demand multimedia applications (think live music and video streaming). The other major issue is not technical but operational cost (OPEX): T1 and MPLS circuits are expensive – the former may offer better point-to-point performance but is static, while the latter is highly configurable. SD-WAN technologies aim to bring the cost per megabyte down by at least 60%, according to the latest estimates.
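
To make the economics concrete, here is a toy Python comparison; the prices are illustrative placeholders, not vendor quotes:

    # Illustrative monthly costs - placeholders, not vendor pricing.
    mpls_cost, mpls_mbps = 1200.0, 10    # dedicated MPLS circuit
    inet_cost, inet_mbps = 300.0, 100    # business internet connection

    mpls_per_mbps = mpls_cost / mpls_mbps  # $120 per Mbps
    inet_per_mbps = inet_cost / inet_mbps  # $3 per Mbps

    saving = 1 - inet_per_mbps / mpls_per_mbps
    print(f"Cost per Mbps: MPLS ${mpls_per_mbps:.0f} vs internet ${inet_per_mbps:.0f}")
    print(f"Saving: {saving:.0%}")  # comfortably beyond the 60% cited above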

Another benefit of SD-WAN is that it works over a variety of media (for example, you can also use a wireless connection), allowing service chaining, policy-based centralized control, application intelligence, automation, flexibility and elasticity. The reason Internet connections weren't used for enterprise WAN services is that the Internet was always a conglomerate of best-effort networks built on different technologies. Simply put: it wasn't reliable or secure enough for most corporate needs. SD-WAN was designed to change all of that.

Some of you may be thinking: yes, all of that sounds fantastic, Javier, but as with other implementations of SDN, won't this require a huge investment? Most solutions consist of a central controller (often hosted in the cloud) plus on-premises access nodes that support the technology, meaning you will have to throw away a lot of old equipment and invest heavily in new premises equipment, right? And about SD-WAN being mainstream already – aren't we really years away from that?

Yes and no.

Leaders in the SD-WAN Space

Remember the blogs we wrote about the three different kinds of SDN (Open, APIs and Overlays)? Open SDN requires a higher CAPEX investment but brings additional innovation and advantages, while SDN via overlays and SDN via APIs are ideal for brownfield deployments and the reuse of legacy equipment. To help make SD-WAN a reality for companies, two of the leaders in this area – Cisco and VMware – have made some bold moves.

Cisco bought Viptela for $610 million and is making its SD-WAN technology available not only on all ISR and ASR routers but also on ENCS 5000 routers that are around four years old. In practical terms, that means Cisco will push SD-WAN to over 1,000,000 routers in a matter of weeks, the most massive mainstream implementation of this technology to date. This is great, right? Not if you’re a customer that has spent years trying to uncouple yourself from vendor lock-in. One of the key motivations for implementing SDN was to avoid closed systems and use inexpensive white boxes instead, escaping vendor hegemony and lock-in.

Cisco, like most networking manufacturers, wants to preserve its hardware hegemony as long as possible, for obvious reasons, and it is not shy about touting the advantages of an end-to-end Cisco SD-WAN solution.

Figure 2: Why Cisco SD-Branch is Better than a ‘White Box.’

The other leader in this space, VMware, also recently (November 2017) purchased a leader in SD-WAN technology, VeloCloud, for an estimated $449 million (according to Futuriom). Although VeloCloud offers multiple x86 appliance options with the software preloaded, it was designed to run on any multi-core x86 hardware, and it offers additional features such as active network performance measurement (via BFD) and forward error correction. Both products come in several flavors: on-premises or cloud for Viptela; internet, hybrid SD-WAN or on-premises for VeloCloud. Both Viptela and VeloCloud work as an overlay, support zero-touch provisioning, expose northbound REST APIs, and support policy provisioning via the controller.
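As a rough illustration of what policy provisioning through such a northbound REST API looks like, here is a short Python sketch. The controller URL, endpoint, payload schema and token are invented for illustration; they do not correspond to the actual Viptela or VeloCloud APIs.

```python
import json
import urllib.request

# Hypothetical northbound REST call to an SD-WAN controller.
# The endpoint, payload fields and auth scheme are invented for
# illustration; real Viptela/VeloCloud APIs differ.
CONTROLLER = "https://sdwan-controller.example.com/api/v1"
TOKEN = "REPLACE_WITH_REAL_TOKEN"

def push_app_policy(app, preferred_link, fallback_link):
    """Steer one application class onto a preferred WAN link."""
    policy = {
        "name": f"steer-{app}",
        "match": {"application": app},
        "action": {"preferred": preferred_link, "fallback": fallback_link},
    }
    req = urllib.request.Request(
        f"{CONTROLLER}/policies",
        data=json.dumps(policy).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example: keep voice on the MPLS circuit, let it fail over to broadband.
# push_app_policy("voip", preferred_link="mpls-1", fallback_link="inet-1")
```

The point of the sketch is the operational model: one POST to the controller, and the policy is enforced across every access node in the overlay, instead of a box-by-box CLI change.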

Although VMware has a full SDN-NFV ecosystem with its NFV 3.0 platform (including a VIM, the NSX SDN controller, vRO for orchestration, etc.), it is not trying to force customers into a monolithic approach. In fact, VMware allows closer integration with OpenStack thanks to VIO (VMware Integrated OpenStack), and VeloCloud works in non-VMware ecosystems as well.

Customers will have to weigh the pros and cons of a closed system versus a vendor-independent approach. If Cisco’s bet on the closed system pays off, it will bring back the vendor lock-in of the 90s: an end-to-end silo from the hardware at the bottom up through the NFVI, the VNFs and orchestration.

Figure 3: Vendor Hegemony Trojan Horse? (Source: Martin Kozlowski)

Summary

SD-WAN is becoming completely mainstream, but the old debate between open multivendor systems, where the customer chooses what best fits their needs, and a single-vendor silo seems to be making a comeback. In total fairness, every option has pros and cons: a single-company silo could theoretically provide better end-to-end support and seamless integration between components, while open multivendor systems increase innovation speed, customer freedom and speed of adoption.

Part of the SDN Blog Series

Demystifying Software-defined Networks Part V: A Decade Later, Where Are We Now? (Part II)

Demystifying Software-defined Networks Part IV: A Decade Later, Where Are We Now?

Demystifying Software-defined Networks Part III: SDN via Overlays

Demystifying Software-defined Networks Part II: SDN via APIs

Demystifying Software-defined Networks Part I: Open SDN Approach

Sources

Why Cisco SD-Branch is better than a ‘white box’

www.futuriom

www.sdxcentral.com
Leveraging Lean IT for Service Delivery Optimization
https://infocus.dellemc.com/gabriel_lopez/leveraging-lean-it-for-service-delivery-optimization/ (Tue, 18 Sep 2018)

As a Managed Services organization, Dell EMC takes the quality of the services we deliver to our customers very seriously. Over the years we have developed best-in-class service delivery practices for the technologies we support, not only for our own products, but also for a variety of technologies available in the market today.

Our approach to service delivery includes a strong, skilled set of Professional Service Managers who are tasked with guiding our operations across the globe in applying Dell EMC best practices, achieving proactive support and sustaining our internal Continual Service Improvement cycle.

In a previous blog I discussed the challenge of delivering Better, Faster and Cheaper IT services, particularly around Cloud Computing. The challenge is real!

Continuous Improvement

Whether you are part of an external Service Delivery organization like ours or a member of an internal Service Delivery team, I’m sure we share the same challenge. We aim to deliver best-in-class services to customers around the globe and across a variety of industries, and we are tasked with maximizing delivery operations, with a healthy contribution to the bottom line, by continuously improving and reducing service cost to enable sustainable growth. Our customers, like your internal customers, are demanding more for less!

In the highly competitive environment our organizations face every day, business disruptions are simply not tolerated. Our customers and their businesses have developed a high level of dependency on the technologies we support. Cutting cost by reducing service levels is simply not enough; compromising service quality may work as a very short-term policy, but it won’t pay off over time.

Reaching our target requires a delicate balancing act between the cost, efficiency and capacity of the activities and processes we execute every day in our service delivery. This balancing act must be performed in such a way that service quality is not compromised and the value delivered to our customers is maintained, and even increased, over time.

Extreme Cost Optimization

According to the Technology Services Industry Association (TSIA) in a paper by Thomas E. Lah published in February 2018, one of the top seven key industry trends is what he calls “Extreme Cost Optimization.” It’s a clear trend for service providers, driven by customers, so it easily translates to every IT Service Delivery organization. In his paper, Lah states that in order to pursue avenues of cost reduction, two of the levers being pulled hard are “extreme service automation” and “reduced workforces.” He also argues that “support and field services organizations are reaching the limits of these levers.”

So, how can we achieve this “Extreme Cost Optimization” without impacting the quality and value we deliver to our customers? How can we know just how much it’s costing us to execute processes such as virtual machine (VM) or storage provisioning? How do we measure the efficiency of these processes? What is quality, and how do we know how much quality is enough so we don’t overdo it? How can we control and measure process capacity?

We do it by adopting Lean IT: minimizing waste, maximizing value, rethinking the ways we conduct business, and changing our mindset and our leadership skills to turn “problems” into positive learning experiences.

Lean IT

The Lean concept is not new. Back in the 1940s, Toyota needed to reduce both the amount of raw materials used in the production of its vehicles and the time that elapsed between acquiring those raw materials and invoicing customers for the finished cars. That’s how the Toyota Production System (TPS) was born. It was rooted in efficiency and quality, with every link of the production chain committed to the quality of the activities performed, resulting in a valuable delivery to the end customer.

The Lean concept is relatively new in its application to IT Services: it’s referred to as Lean IT.

It’s well known that “what is not defined can’t be controlled, what is not controlled can’t be measured and what is not measured can’t be improved.”

Figure 1: Lean IT Methodology

As part of our innovation and continual improvement efforts, Global Service Quality (GSQ) has incorporated Lean IT into its Service Improvement Practice over several years, and recently developed a methodology that enables our organization to measure process efficiency, process cost and process capacity for the very first time. Our enhanced methodology also allows us to define what’s Critical to Customer (CTC) and Critical to Quality (CTQ) by listening to the voice of the customer and the voice of the process.

This new methodology also allows us to run different scenarios and analyze possible impacts on efficiency, cost and capacity for different support structures. It helps us create customer-centered models that reveal the impact of changes before they are implemented, informing strategic decisions and increasing our overall delivery of value.
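By way of a worked illustration, the sketch below shows how process efficiency, cost per execution and capacity might be computed for a hypothetical VM-provisioning process. The formulas are standard Lean measures; all of the input numbers are invented.

```python
# Illustrative Lean IT metrics for a hypothetical VM-provisioning process.
# Formulas are standard Lean measures; every input number is invented.

value_added_min = 45           # time spent on steps the customer values
lead_time_min = 300            # total elapsed time, request to delivery
cost_per_min = 1.50            # loaded labor/infrastructure cost, $/minute
available_min_per_week = 2400  # one team's weekly working minutes

# Process Cycle Efficiency: the share of lead time that actually adds value.
pce = value_added_min / lead_time_min

# Cost per execution: what one hands-on pass through the process costs.
cost_per_execution = value_added_min * cost_per_min

# Capacity: how many requests the team can absorb per week.
capacity_per_week = available_min_per_week / value_added_min

print(f"Process cycle efficiency: {pce:.0%}")                  # 15%
print(f"Cost per provisioned VM: ${cost_per_execution:.2f}")   # $67.50
print(f"Weekly capacity: {capacity_per_week:.0f} VMs")         # 53 VMs
```

Once these baseline numbers exist, “what-if” scenarios (a different support structure, an automated step) become simple substitutions into the same formulas, which is exactly what makes the modeling useful before any change is implemented.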

Dell EMC’s IMS Value Stream Mapping Methodology

Earlier this year, the Dell EMC IMS Value Stream Mapping Methodology was piloted with one of our Competency Groups in our Draper Center of Excellence (CoE). We are currently in the process of moving this new tool from pilot to production so it can be utilized across different accounts. I will be sure to report on its results.

Figure 2: Dell EMC’s IMS Value Stream Mapping Methodology

Summary

A best-in-class service delivery model entails a skilled set of Professional Service Managers who leverage best practices to achieve proactive support and sustain continual service improvement while applying Lean IT disciplines. Like Dell EMC, an organization that fosters a culture of innovation, turning “problems” into positive learning experiences, will sustain its health and competitive edge.

Best Practices for Virtualizing Active Directory Domain Controllers (AD DC), Part I
https://infocus.dellemc.com/matt-_liebowitz/best-practices-virtualizing-active-directory-domain-controllers-ad-dc-part-i/ (Mon, 17 Sep 2018)

Virtualized Active Directory is ready for Primetime!

In today’s technology climate, monitoring for changes should be part of every organization’s security culture. Your IT team knows the importance of securing the network against data breaches from external threats; however, data breaches from inside the organization represent nearly 70% of all data leaks[1].

Are you doing enough to prevent data leaks? Enter Active Directory Domain Services (AD DS).

“Virtualize-First” Is the New Normal

Reasons to virtualize Active Directory Domain Controllers.

As the predominant directory service and authentication store, AD DS is present in the majority of network infrastructures and is a business-critical application (BCA). It provides the methods for storing directory data and making that data available to network users and administrators, storing information about user accounts (names, passwords, phone numbers and so on) and enabling other authorized users on the same network to access it.

Just as the criticality of AD DS differs between organizations, so does the acceptance of virtualizing this service. More conservative organizations choose to virtualize a portion of the AD DS environment and retain a portion on physical hardware. This caution stems from the complexity of timekeeping in virtual machines, deviation from current build processes or standards, the desire to keep an AD Flexible Single Master Operations (FSMO) role holder physical, concerns about privilege escalation, and fear of a stolen .vmdk file.

Figure 1: Common Objections to Domain Controller Virtualization

But fear not!

The release of Windows Server 2012 (and Windows Server 2016), with its virtualization-safe features and support for rapid domain controller deployment, alleviates many of the legitimate concerns administrators have about virtualizing AD DS. VMware® vSphere® and our recommended best practices also help achieve 100 percent virtualization of AD DS.

Best Practices for Active Directory (AD) Availability

Active Directory is the cornerstone of every environment: when Active Directory comes to a halt, everything connected to it does too.

Since many domain controller virtual machines may be running on a single VMware ESXi host, eliminating single points of failure and providing a high-availability solution ensures rapid recovery. VMware provides solutions for automatically restarting virtual machines: if an ESXi host goes down, VMware High Availability (HA) can automatically restart a domain controller virtual machine on one of the remaining hosts, preventing loss of Active Directory. Using configuration options, you can set the restart priority and isolation response for individual virtual machines. For example, domain controllers functioning as global catalog servers should be online before your Exchange Server environment initializes, so it is always a best practice to set your domain controller virtual machines as high-priority servers.

Additionally, you can implement a script that restarts a virtual machine via a loss-of-heartbeat alarm in vCenter (such scripts are available with the VI Perl Toolkit or the VMware Infrastructure SDK 2.0.1). Combined with VMware Distributed Resource Scheduler (DRS), you can also ensure that domain controllers from the same domain always reside on different ESXi hosts, so you never place all your domain controllers in one basket: DRS anti-affinity rules let you specify which virtual machines must stay together and which must be kept apart, as in the sketch below.
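As a hedged illustration of that anti-affinity setup, here is a Python sketch using the open-source pyVmomi SDK. The vCenter address, credentials, and cluster and VM names are placeholders, and you should verify the call sequence against your pyVmomi version before using it.

```python
# Sketch: create a DRS anti-affinity rule that keeps two domain controller
# VMs on separate ESXi hosts, via the open-source pyVmomi SDK
# (pip install pyvmomi). All names and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Look up a managed object (VM, cluster, ...) by its display name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    obj = next(o for o in view.view if o.name == name)
    view.DestroyView()
    return obj

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="CHANGE_ME",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

cluster = find_by_name(content, vim.ClusterComputeResource, "Prod-Cluster")
dc_vms = [find_by_name(content, vim.VirtualMachine, n)
          for n in ("DC01", "DC02")]

# Anti-affinity rule: DRS must never place these VMs on the same host.
rule = vim.cluster.AntiAffinityRuleSpec(
    vm=dc_vms, enabled=True, mandatory=True,
    name="separate-domain-controllers")
spec = vim.cluster.ConfigSpecEx(
    rulesSpec=[vim.cluster.RuleSpec(info=rule, operation="add")])
cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)

Disconnect(si)
```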

For guidance, follow Microsoft Operations Master Role Placement Best Practices or Dell EMC’s recommended practices.

Achieving Active Directory (AD) Integrity in Virtual Environments

Performing consistent system state backups eliminates hardware incompatibility when performing a restore, and ensures the integrity of the Active Directory database by committing transactions and updating database IDs.

For success in implementing Active Directory in the virtual environment, you must ensure a successful migration from the physical environment to the virtual one. Since Active Directory is heavily dependent on a transaction-based datastore, you must guarantee its integrity by making sure there is a solid, reliable means of providing accurate time services to the PDC Emulator and the other domain controllers throughout the Active Directory forest.

Network performance is another key to success in a virtual Active Directory implementation, since slow or unreliable network connections can make authentication difficult. Modifying the DNS weight and priority of your domain controllers’ SRV records can help reduce load on the primary domain controller and improve performance (the sketch below shows how to inspect these values). Because Active Directory depends on reliable replication, ensure continuity by using replmon to monitor it. Also, continue regular system state backups, and always restore from a system state backup. Virtual machines make it easy to move domain controllers; use VMware High Availability (HA) and VMware Distributed Resource Scheduler (DRS) so that no critical domain controllers share a single host.
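To see how weight and priority are currently distributed across your domain controllers, you can read the SRV records Active Directory publishes in DNS (on the DC side, these values are typically adjusted via the LdapSrvWeight and LdapSrvPriority Netlogon registry entries). Here is a small sketch using the third-party dnspython package; the domain name is a placeholder.

```python
# Inspect the weight and priority of the SRV records Active Directory
# publishes for its domain controllers. Requires the third-party dnspython
# package (pip install dnspython); the domain below is a placeholder.
# On dnspython releases older than 2.0, use dns.resolver.query() instead.
import dns.resolver

DOMAIN = "corp.example.com"
srv_name = f"_ldap._tcp.dc._msdcs.{DOMAIN}"

for record in dns.resolver.resolve(srv_name, "SRV"):
    # Clients prefer lower priority; among equal priorities, higher
    # weight draws proportionally more traffic.
    print(f"{record.target}  priority={record.priority}  "
          f"weight={record.weight}  port={record.port}")
```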

Practice the art of disaster recovery regularly. Finally, always go back and re-evaluate your strategies; monitor results for improvements and make adjustments when necessary.

Making Active Directory Confidential and Tamper-proof

Assessments of organizations that have experienced catastrophic compromise events usually reveal that they have limited visibility into the actual state of their IT infrastructures, which may differ significantly from their “as documented” states. These variances introduce vulnerabilities that expose the environment to compromise, often with little risk of discovery until the compromise has progressed to the point at which the attackers effectively “own” the environment.

Detailed assessments of these organizations’ AD DS configurations, public key infrastructures (PKIs), servers, workstations, applications, access control lists (ACLs) and other technologies reveal gaps in administrative practices, misconfigurations and vulnerabilities that, had they been remediated, could have prevented the compromise and, in extreme cases, prevented attackers from establishing a foothold in the AD DS environment.

See Microsoft’s Monitoring Active Directory for Signs of Compromise for further insights.

Figure 2: 4 General Practices for Active Directory Confidentiality

Summary

There are several excellent reasons for virtualizing Windows Active Directory. Virtualization offers the advantages of hardware consolidation, total cost of ownership reduction, physical machine lifecycle management, mobility and affordable disaster recovery and business continuity solutions. It also provides a convenient environment for test and development, as well as isolation and security.

Stay tuned for part II of this blog series where I’ll address proper time and synchronization with virtualized AD DC, replication, latency and convergence; preventing and remediating lingering objects, cloning, and disaster recovery.

Please reach out to your Dell EMC representative or check out Dell EMC Consulting Services to learn how we can help you virtualize AD DS, or leave me a comment below and I’ll be happy to respond.

Sources

Virtualizing a Windows Active Directory Domain Infrastructure

Microsoft’s Avenues to Compromise

[1] Statista.com Data Breaches Recorded in the U.S. by Number of Breaches and Records Exposed

Accelerating Exploratory Analytics with Big Data as a Service
https://infocus.dellemc.com/matt_maccaux/accelerating-exploratory-analytics-with-big-data-as-a-service-bdaas/ (Tue, 11 Sep 2018)

In today’s digital age, the Big Data landscape is rapidly evolving for both data science and IT teams, with a steady stream of new products, tools and frameworks being released and incorporated into an already complex ecosystem. Data scientists and developers want flexibility and choice, with on-demand access to new Big Data technologies such as machine learning and artificial intelligence. IT managers are under pressure to support these new innovations and the ever-changing menagerie of tools, while also providing enterprise-grade IT security and control. Meanwhile, demand from the business for new analytical capabilities is growing faster than the organization can support.

Under these conditions, it has become increasingly difficult for enterprises to keep up with the pace of change in Big Data. Traditional deployment methodologies and architectures using bare metal servers with direct-attached storage for data lakes can quickly become disk/storage-constrained as an organization’s use of the data expands. As nodes/servers are added, the management overhead becomes costly and inefficient, not to mention the costs of the servers themselves.

A common approach to address this problem is to spread the data across multiple Hadoop® clusters. However, with the rapid growth of data, this also becomes inefficient to maintain, even more so when copies of data reside on multiple clusters. As clusters proliferate and the number of analytics and data science applications and tools increases, enforcing access restrictions and policies also becomes challenging as the environment scales.

Additionally, the time-consuming nature of manually building a new environment for each user—to acquire a compute node with storage, install the operating system, install the Hadoop version and applications, patch, test and deploy, and then secure all of those components—can compound the chances of errors and cause costly delays to the business.

Our Approach to Exploratory Analytics

In previous blogs, we’ve talked a lot about our Elastic Data Platform, a proven and cost-effective solution that enables organizations to address these challenges at speed and scale for exploratory analytics and flexible workloads. The integrated solution is designed to extend and augment an organization’s existing Big Data investments with workload-specific infrastructure, intelligent software, and end-to-end automation, and is delivered by Dell EMC Consulting to accelerate time to value.

New Ready Solutions for Big Data

If you are looking to upgrade your Big Data infrastructure, Dell EMC has you covered with the newly announced Ready Solutions for Big Data and a Big Data as a Service (BDaaS) design. These pre-engineered, integrated solutions include Dell EMC best-of-breed servers and networking, BlueData EPIC software, and services to fast-track and simplify your analytics journey with a secure, on-premises Big Data as a Service capability. The solution leverages core components of the Elastic Data Platform, including cluster deployment and multi-tenancy, enabling self-service analytics, provisioning in minutes, and tooling flexibility.

Experts Every Step of the Way

The Ready Solutions for Big Data include consulting, deployment and support services from Dell EMC Services to help customers drive rapid adoption and optimization of the solution in their Big Data environments, from initial setup and integration through ongoing support and roadmap planning.

During a 6-week engagement, Dell EMC consultants work with you to identify the analytics use case that will have the most business impact, gather requirements and design the solution architecture. Our teams then install, configure and integrate the Ready Solution into your environment for the prioritized use case. This includes tying into the existing security framework (e.g., Active Directory), connecting to existing Hadoop systems, and developing custom application package templates for data scientists, analysts, and engineers to get a fast start with the solution.

Dell EMC Consulting will also run workshops and develop a roadmap for how the BDaaS solution can be extended to the rest of the enterprise to include consolidating systems and infrastructure, centralizing the data lake, and offering end-to-end automated provisioning leveraging the existing IT ticketing systems (e.g., ServiceNow).

Once the Ready Solution is implemented, Dell EMC ProSupport provides comprehensive hardware and collaborative software support to help ensure optimal system performance and minimize downtime. Customers can also opt for ProSupport Plus to get a Technology Service Manager who provides a single point of contact for support. And Dell EMC Education Services offers a range of courses and certifications on Data Science and Advanced Analytics, a great option for training teams on the new technologies.

Getting Started

The new Dell EMC Ready Solutions for Big Data offer a compelling way to fast-track and simplify Big Data as a Service and exploratory analytics using Dell EMC’s world-class technology and proven expertise and services. We also offer the accelerated 6-week Big Data as a Service implementation as a standalone option, or we can deliver a full enterprise-wide solution.

Are you ready to harness the power of big data and analytics to transform your organization? To learn more, reach out to your Dell EMC representative or check out Dell EMC Consulting Services to find out how we can help you get started on your transformation.
