Bill Schmarzo – InFocus Blog | Dell EMC Services

The 3% Edge: How Data Drives Success in Business and the Olympics
Tue, 20 Feb 2018

A recent Bloomberg BusinessWeek article entitled “The Tech Guy Building Wearables for America’s Olympians” profiles Mounir Zok, the man in charge of the U.S. Olympic Committee’s technology and innovation. The article discusses how Mr. Zok is bringing a Silicon Valley data scientist mentality to help America’s Olympic athletes more effectively leverage data and analytics to win Olympic medals.

To quote the article:

Zok won’t say who his partners were in the development process or even which athletes are using the suits; any hints might tip off Olympic engineers in other countries, erasing the USOC’s advantage. “I call it the 1 percent question,” he says. “Olympic events typically come down to a 1 percent advantage. So what’s the one question that, if we can provide an answer, will give our athletes that 1 percent edge?”

Wait a second, what is this “1% edge,” and is that something that we can apply to the business world? I wanted to drill into this “1% edge” to not only verify the number, but to further understand how the “1% edge” might apply to organizations trying to effectively leverage data and analytics to power their businesses (see “Demystifying the Big Data Business Model Maturity Index”).

Verifying the 1% Edge

To start validating this 1% edge, I evaluated single athlete sports, where focusing on the singular performer is easier than a team sport. Here’s what I found.

Professional Golf. The top 5 worldwide professional golfers (as measured by strokes per round) are only 3 percent better than the players ranked #96 – #100. Even more amazing: despite being separated by only 3 percent in stroke average, the golfers ranked #96 – #100 earned 89.5 percent less than the top 5 (see Figure 1)!

Figure 1: YTD Statistics, Farmers Insurance Open, January 28, 2018

The 3 percent edge is quite evident in golf. A few strokes can be the difference between victory and defeat, and the earnings data demonstrates just how large the resulting disparity in earning potential can be.

2016 Olympics Men’s Track. Next, I looked at the 2016 Olympics men’s track events: the 100 meter dash, the 400 meter dash and the marathon. The difference between the very best and those dreaming of gold medals was again only a small percentage, specifically fractions of seconds in the sprinting events.

Figure 2: 2016 Olympic Men’s 100 Meter Results

 

Figure 3: 2016 Olympic Men’s 400 Meter Results

 

Figure 4: 2016 Olympic Men’s Marathon Results

In summary:

  • The difference between a gold medal and no medal was between 1.22% and 2.28%
  • The difference between a gold medal and 8th place IN THE OLYMPICS was between 2.40% and 3.67%

Think about the years of hard work and commitment these world-class athletes put into preparing for these events, only to finish out of the medals by approximately 2%. So while the “1% edge” may not be entirely accurate, I think a 1% to 3% difference on average looks about right for athletes (and organizations) that want to be considered world class.

Applying the 3% Edge to Become World Class

What does a 3 percent edge mean to your business? What does it mean to be 3 percent better in retaining customers, or bringing new products to market, or reducing hospital readmissions, or preventing unplanned maintenance?

While I couldn’t find any readily available metrics about world class in these business areas, I came back to the seminal research from Frederick F. Reichheld and W. Earl Sasser, Jr. highlighted in the classic “Harvard Business Review” article “Zero Defections: Quality Comes to Services” written in 1990. The bottom line from their research: increasing customer retention rates by 5% increases profits by 25% to 95% (see Figure 5).

Figure 5: Profitability Impact from a 5% Increase in Customer Retention

When these research results were published in 1990, they startled so many marketing executives that they set off a rush to acquire Customer Relationship Management (CRM) applications from vendors like Siebel Systems.

The Power of Compounding 1% Improvements

One of the most powerful concepts behind “hitting lots of 3 percent singles versus a single 20 percent homerun” is the concept of compounding. So what does a “3 percent compounding” actually look like? Let’s walk through a fraud example.

Let’s say you have a $1 million run-rate business with an annual 10 percent fraud rate. That results in $100K in annual fraud losses. What if, through the use of advanced analytics, you were able to reduce the fraud rate by 3 percent each year? What is the cumulative effect of a 3 percent annual improvement over five and 10 years?

Figure 6: 3% Compounded Impact on Fraud
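
To make the arithmetic concrete, here is a minimal sketch of the compounding math (the dollar figures match the example above; how Figure 6 rounds or compounds may differ slightly):

```python
# Minimal sketch of the compounding math: $1M run-rate business, 10% baseline
# fraud rate, and a fraud rate that improves by 3% per year, compounding.
baseline_losses = 1_000_000 * 0.10          # $100K in annual fraud losses

for horizon in (5, 10):
    cumulative_savings = 0.0
    for year in range(1, horizon + 1):
        losses = baseline_losses * (0.97 ** year)   # 3% better each year
        cumulative_savings += baseline_losses - losses
    print(f"{horizon} years: cumulative savings = ${cumulative_savings:,.0f}")
```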

While the results start off pretty small, it doesn’t take much time until the compounding and cumulative effects of a 3 percent improvement provide a significant financial return. And though it may not make much sense to look beyond five years (due to customer turnover, technology, evolving competition and market changes), even at five years the financial return is significant.

Take it a step further and consider the impact when combining multiple use cases, such as:

  • Waste and spoilage reduction
  • Energy effectiveness
  • Preventative maintenance
  • Unplanned network or grid downtime
  • Hospital acquired infections
  • Unplanned hospital readmissions
  • Power outages
  • Delayed deliveries

A business that acquires a 3 percent compounding effect across numerous use cases begins to look like a business that can achieve compound growth.

Summary

I believe there is tremendous growth opportunity for organizations that have the data and analytical disciplines to drill into what a 3 percent improvement in performance might mean to the overall health of their business. Such analysis would not only highlight the power of even small improvements, but offer clarity into what parts of the business should be prioritized for further acceleration.

Sources:

Figure 1: YTD Statistics, Farmers Insurance Open, January 28, 2018

Don’t Follow the Money; Follow the Customer!
Wed, 14 Feb 2018

“Mr. Schmarzo, we’ve noticed that your cholesterol count is at 210, so we have prescribed Fluvastatin and placed selected foods into your shopping basket to help you control your cholesterol. Complete the purchase by selecting ‘here’ and we’ll deliver the medication and groceries to your home today between 4:00 and 4:20pm.  If you complete the full Fluvastatin prescription, then we’ll reduce your monthly healthcare insurance payment by 5%.”

This scenario is surprisingly close to reality as mergers cause traditional healthcare industry borders (healthcare provider, healthcare payer, pharmacy) to crumble.  A recent BusinessWeek article “CVS Brings One-stop Shopping to Health Care” highlighted the potential benefits of a vertical consolidation of the healthcare ecosystem players:

The [CVS – Aetna] deal would create a behemoth that would try to shift some of Aetna customers’ care away from doctors and hospitals and into thousands of CVS stores. “Think of these stores as a hub of a new way of accessing health-care services across America,” says CVS Chief Executive Officer Larry Merlo. “We’re bringing health care to where people live and work.”

Healthcare value chain vertical integrations could provide substantial benefits to everyday consumers and patients alike, including:

  • An accelerated move to value-based care (focusing on preventive care) and away from the traditional “pay by the service” model (which rewards healthcare participants for more care)
  • A reduction in some of the dysfunctional incentives built into today’s healthcare value chain, such as pharmacy benefit managers (PBMs) profiting from back-end rebates and fees extracted from pharmacy companies

A superior understanding of customers’ behaviors, preferences, and product usage patterns forms the basis for industry transformations.

New Normal: Business Model Disintermediation and Disruption

Industry after industry is under attack by upstart disruptors, and no industry is safe. The basis for their attack is exploiting and monetizing superior knowledge of customer preferences and buying habits. The more these disruptors know about their customers – their preferences, behaviors, tendencies, inclinations, interests, passions, associations, affiliations – the better positioned they are to create new sources of value and revenue (see Figure 1).

Figure 1: Business Model Disruption and Customer Disintermediation

Established companies are being attacked by companies that are more effective at leveraging big data technologies, new sources of customer, product and operational data, and advanced analytics (machine learning, deep learning, and artificial intelligence) to:

  • Disrupt business models by applying customer, product, operational and market insights to optimize key business and operational processes. Additionally, data-driven insights uncover new sources of revenue such as new products, services, markets, audiences, channels, partners, etc.
  • Disintermediate customer relationships by exploiting detailed customer engagement behaviors and product usage tendencies to provide a more compelling and differentiated user experience.

Check out “The New Normal: Big Data Business Model Disintermediation and Disruption” for more details on business model disruption and customer disintermediation.

The following companies are challenging traditional industry business models with superior customer preferences and buying habits:

  • Uber: The world’s largest taxi company owns 0 taxis
  • Airbnb: The largest accommodation provider does not own real estate
  • TripAdvisor: The world’s largest travel company owns 0 inventory
  • Skype, Whatsapp, WeChat: The largest phone companies do not own any telco infrastructure
  • SocietyOne: The fastest growing bank has no actual money
  • eBay: One of the world’s most valuable retailers has no inventory
  • Apple & Google: The largest software vendors write a minimal number of apps
  • Facebook: The most popular media owner does not create content
  • Netflix: The world’s largest movie house does not own any cinemas or create any content (until recently)

Industry transformations will only accelerate because leading companies realize that instead of “following the money,” they should “follow the customer.”

Follow the Customer

“Follow the money” is a catchphrase used to understand an organization’s flow of money and sources of value. Organizations use accounting, auditing, investigative, data and analytic skills to “follow the money” and determine their financial value.

However, this infatuation with following the money can actually lead organizations astray, and make them vulnerable to disruption and disintermediation by more nimble, more customer-focused organizations. Such disruption tends to occur in industries where:

  • The market is too fragmented for any one organization to provide a complete customer solution and experience.
  • Customer experiences are unsatisfactory.
  • Customer outcomes are questionable, or are downright wrong.
  • A “product mentality” permeates the senior executive team.

For example, Amazon is vertically integrating the grocery industry with their recent acquisition of Whole Foods. Where Amazon plans to take the grocery industry (as well as the entire retail industry) starts with their mission statement:

  • Traditional Grocer: “Our goal is to be the first choice for those customers who have the opportunity to shop locally”
  • Amazon: “To be Earth’s most customer-centric company, where customers can find and discover anything they might want to buy online, and get those items quickly, at the lowest possible prices”

Amazon enhances and simplifies the customer-centric experience with a host of simple, easily accessible user experience choices such as one-click buying, mobile ordering, free and same day delivery, and more.

Check out “What is Digital Transformation?” for examples of how Amazon is leveraging customer insights to vertically integrate the grocery industry.

Optimizing the Customer Experience

80% of customers want a personalized experience from their retailer. Customers don’t want to be treated as numbers on a fact sheet, and they love it when organizations show a semblance of personalization towards them[1].

Providing a more holistic, more engaging customer experience starts with understanding each individual customer’s behaviors, tendencies, inclinations, biases, preferences, patterns, interests, passions, associations and affiliations. More than just capturing the customer’s purchase and social media data, leading customer-centric organizations uncover and codify 1) what products and services a customer tends to buy and 2) what products and services customers like them buy.

Amazon is arguably the industry leader in providing a highly personalized customer experience that starts with their recommendation engine (see Figure 2).

Figure 2: Amazon Recommendation Engine

Amazon recently open-sourced their artificial intelligence framework (DSSTNE: Deep Scalable Sparse Tensor Neural Engine) that powers their recommendation engine.  Amazon’s product catalog is huge, making their purchase transactions datasets extremely sparse.  This creates a significant challenge for traditional neural network frameworks, so Amazon created DSSTNE to generate recommendations that power personalized experiences across the Amazon website and Amazon devices[2].
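
DSSTNE itself is a C++ engine tuned for huge, sparse catalogs, but the underlying idea (learn latent factors from a sparse purchase matrix, then score the items a customer hasn’t bought yet) can be sketched in a few lines of plain NumPy. This toy is not DSSTNE, and the data is made up:

```python
import numpy as np

# Toy purchase matrix: rows = customers, columns = products, 1 = purchased.
# Real catalogs have millions of columns, which is what makes sparsity hard.
R = np.array([[1, 0, 0, 1, 0],
              [0, 1, 0, 1, 0],
              [1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1]], dtype=float)
mask = R > 0                       # only observed purchases drive the loss

rng = np.random.default_rng(42)
k, lr, reg = 2, 0.05, 0.01         # latent factors, learning rate, L2 penalty
U = rng.normal(scale=0.1, size=(R.shape[0], k))    # customer factors
V = rng.normal(scale=0.1, size=(R.shape[1], k))    # product factors

for _ in range(500):               # plain gradient descent on observed cells
    err = mask * (R - U @ V.T)
    U += lr * (err @ V - reg * U)
    V += lr * (err.T @ U - reg * V)

scores = U @ V.T                   # predicted affinity, including unbought items
print(np.round(scores, 2))         # recommend the highest-scoring zeros in R
```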

Dell EMC Consulting uses Analytic Profiles to capture and codify a customer’s behaviors, tendencies, inclinations, biases, preferences, patterns, interests, passions, associations and affiliations (see Figure 3).

Figure 3: Customer Analytic Profile

See “Analytic Profiles: Key to Data Monetization” for more details on Analytic Profiles.
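
For illustration only (the field names below are my own assumptions, not an actual Dell EMC Consulting schema), you can think of an Analytic Profile as a per-customer record of analytic scores and propensities that models keep up to date:

```python
from dataclasses import dataclass, field

@dataclass
class CustomerAnalyticProfile:
    """Illustrative Analytic Profile; all field names are hypothetical."""
    customer_id: str
    churn_risk_score: float = 0.0          # 0..1, output of a retention model
    fraud_risk_score: float = 0.0          # 0..1, output of a fraud model
    lifetime_value_estimate: float = 0.0   # predicted LTV in dollars
    product_propensities: dict = field(default_factory=dict)  # product -> score
    preferred_channel: str = "unknown"     # e.g. "mobile", "store", "web"

profile = CustomerAnalyticProfile("cust-0042")
profile.product_propensities["statin_rx"] = 0.83
print(profile)
```

Each score is the output of an analytic model, and the profile becomes the reusable asset that multiple use cases draw upon.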

Organizations can leverage customer insights captured in the Analytic Profiles to optimize key business and operational processes, reduce security and compliance risks, uncover new revenue opportunities, and create a more compelling customer engagement lifecycle (see Figure 4).

Figure 4: Optimizing the Customer Lifecycle

See “Optimizing the Customer Lifecycle With Customer Insights” for more details on leveraging big data and data analytics to optimize your customer’s lifecycle.

Follow the Customer Summary

Leading organizations are realizing that instead of “following the money,” they should be “following their customers,” mining their customers’ buying habits regardless of artificially defined industry boundaries (see Figure 5).

It is these customer insights that will transform the organization’s business models, disintermediate under-served customers, create new sources of revenue, and eventually transform the business into an intelligent enterprise.

Sources:

[1] Retail: How to Keep it Personal & Take Care of Privacy

[2] Generating Recommendations at Amazon Scale with Apache Spark and Amazon DSSTNE

 

How State Governments Can Protect and Win with Big Data, AI and Privacy
Tue, 06 Feb 2018

I was recently asked to conduct a 2-hour workshop for the State of California Senior Legislators on the topic of “Big Data, Artificial Intelligence and Privacy.” Honored by the privilege of offering my perspective on these critical topics, I reviewed the once-in-a-generation opportunities awaiting the great State of California (“the State”), where decision makers could vastly improve their constituents’ quality of life while creating new sources of value and economic growth.

Industrial Revolution Learnings

We have historical experiences and references to revisit in discerning what government can do to nurture our “Analytics Revolution.” Notably, the Industrial Revolution holds many lessons regarding the consequences of late and/or confusing government involvement and guidance (see Figure 1).

Figure 1: Lessons from the Industrial Revolution

 

Government’s role in the “Analytics Revolution” is clear: to carefully nurture and support industry, university, and government collaboration to encourage sustainable growth and prepare for massive changes and opportunities. The government can’t afford to stand by and let the markets decide. By the time the markets have decided, it may be too late to redirect and guide resources, especially given the interests of Russia and China in this all-important science.

Be Prepared to Act on the Nefarious

Access to sensitive information, data protection, privacy – these are all hot-button issues with the citizenry. The State must be aware of the societal and cultural risks associated with the idea of a “Big Brother” shadowing its people. The State must champion legislation in cooperation with industry in order to protect the masses while not stifling creativity and innovation. That’s a tough job, but the natural conflict between “nurturing while protecting” is why government needs to be involved early. Through early engagement, the State can reduce the tension between industrial growth and personal privacy.

The “Analytics Revolution” holds tremendous promise for the future of industry and personal achievement, but it will require well-defined rules of conduct and engagement. Unsupervised growth or use may lead to information being exploited in nefarious ways with potentially damaging results.

The State must protect its constituents’ sensitive information while nurturing the industrial opportunity. That’s a tall order, but nothing less should be expected from our government, industry and society leaders.

Can’t Operate in a World of Fear

We can’t be afraid of what we don’t know. The State must increase constituents’ awareness and education of Big Data and Artificial Intelligence: what they are, what they are used for, and the opportunities locked within, including “The Good, the Bad, and the Ugly.”

We can’t operate in a world of fear, jumping to conclusions based upon little or no information, or worse yet, misinformation or purposeful lies. Government leaders must collaborate with industry and universities to actively gain understanding of the true ramifications and capabilities of Big Data and Artificial Intelligence before they create legislation (see Figure 2).

Figure 2: Government Leaders Must Seek Information before Jumping to Conclusions

 

It’s because I’m an educator in this field that I was so honored to be part of this discussion. In addition to discussing the economic opportunities that lie within Big Data and Artificial Intelligence, I wanted to help our legislators understand they should prioritize their own learning and education of these sciences before enacting rules and regulations.

Predict to Prevent

The opportunities for good are almost overwhelming at the government level! Whether in education, public services, traffic, fraud, crime, wild fires, public safety or population health, Big Data and Artificial Intelligence can dramatically improve outcomes while reducing costs and risks (see Figure 3).

Figure 3: Big Data and AI Reducing Crop Loss to Diseases

 

However, to take advantage of the potential of Big Data and Artificial Intelligence, the State, its agencies, and its legislators need to undergo a mind shift. They need to evolve beyond “using data and analytics to monitor agency outcomes” to understanding how to “leverage data and analytics to Predict, to Prescribe and to Prevent!” That is, these organizations need to evolve from a mindset of reporting what happened to one of predicting what’s likely to happen and prescribing corrective or preventative actions or behaviors (see Figure 4).

Figure 4: The “Predict to Prescribe to Prevent” Value Chain

 

There are numerous use cases of this “predict to prevent” value chain that will not only benefit state agencies’ operations, but also have positive quality-of-life ramifications for the residents of California, including the opportunity to prevent:

  • Hospital acquired infections
  • Crime
  • Traffic Jams / vehicle accidents
  • Major road maintenance
  • Cyber attacks
  • Wild fires
  • Equipment maintenance and failures
  • Electricity and utility outages
  • And more…

Role of Government

The role of government is to nurture, not necessarily to create, especially in California. California is blessed with a bounty of human capital resources including an outstanding higher education system and an active culture of corporate investing such as the Google $1B AI Fund (see “Google Commits $1 Billion In Grants To Train U.S. Workers For High-Tech Jobs”).

There is a bounty of free and low-cost Big Data and Artificial Intelligence training available. For example, Andrew Ng, one of the world’s best-known artificial-intelligence experts, is launching an online effort to create millions more AI experts across a range of industries. Ng, an early pioneer in online learning, hopes his new deep-learning course on Coursera will train people to use the most powerful idea to have emerged in AI in recent years.

California sits in rarified air when it comes to the volume of natural talent in the Big Data and Artificial Intelligence spaces. The State should seize on these assets, coordinate all of these valuable resources and ensure that this quality and depth of training is available to all.

State of California Summary

In summarizing what I told my audience, Big Data and Artificial Intelligence provide new challenges, but the opportunities for both private and public sectors are many. To harness the power of Big Data and AI, the State should focus on:

  • Minimizing the impact of nefarious, illegal and dangerous activities
  • Balancing consumer value vs. consumer exploitation
  • Addressing inequities in data monetization opportunities
  • Re-tooling / re-skilling the California workforce
  • Fueling innovation via university-government-business collaboration
  • Adopting regulations to ensure citizen/customer fairness (share of the wealth)
  • Providing incentives to accelerate state-wide transformation and adoption

Figure 5: Threats to the California “Way of Life”

 

It is up to everyone — the universities, companies, and individuals — to step up and provide guidance to our government and education leaders to keep California at the forefront of our “Analytics Revolution.” This is one race where there is no silver medal for finishing second.

The Consumerization of Artificial Intelligence
Wed, 31 Jan 2018

Consumerization is the design, marketing, and selling of products and services targeting the individual end consumer.

Apple CEO Tim Cook recently promoted a $100-per-year iPhone app called Derm Expert. Derm Expert allows doctors to diagnose skin problems using only their iPhone.  Doctors take a photo of a patient’s skin condition and then Derm Expert diagnoses the problem and prescribes treatment. Doctors can effectively treat patients without a high performance computer or an expensive technology environment. They just need the same iPhone that you and I use every day.

Figure 1: Derm Expert App

 

Derm Expert makes use of Apple’s Core ML framework that is built into all new iPhones. Core ML makes it possible to run Machine Learning and Deep Learning algorithms on an iPhone without having to upload the photos to the “cloud” for processing.

Apple is not the only company integrating Machine Learning and Deep Learning frameworks into their products, but it may be the first company to put such a powerful capability into the hands of millions of consumers. Whether we know it or not, we have all become “Citizens of Data Science,” and the world will never be the same.

Embedding Machine Learning Frameworks

Apple Core ML in the iPhone is an example of how industry leaders are seamlessly embedding powerful machine learning, deep learning, and artificial intelligence frameworks into their development and operating platforms. Doing so enables Apple iOS developers to create a more engaging, easy-to-use customer experience, leveraging Natural Language Processing (NLP) for voice-to-text translation (Siri) and facial recognition. Plus, it opens the door for countless new apps and use cases that can exploit the power of these embedded frameworks.

Core ML enables developers to integrate a broad variety of machine learning algorithms into their apps with just a few lines of code. Core ML supports over 30 deep learning (neural network) algorithms, as well as Support Vector Machine (SVM) and Generalized Linear Models (GLM)[1].

For example,

  • Developers can integrate computer vision machine learning features into their app including face tracking, face detection, landmarks, text detection, rectangle detection, barcode detection, object tracking and image registration.
  • The natural language processing APIs in Core ML use machine learning to decipher text using language identification, tokenization, lemmatization and named entity recognition.

Core ML supports Vision for image analysis, Foundation for natural language processing, and GameplayKit for evaluating learned decision trees (see Figure 2).

Figure 2: Core ML Is Optimized for On-Device Performance, Which Minimizes Memory Footprint and Power Consumption
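
To give a feel for the “few lines of code” claim: Apple ships a Python package, coremltools, for converting trained models into the .mlmodel format that Core ML consumes on-device. The sketch below assumes coremltools’ unified converter (the exact API varies by version) and uses a tiny stand-in Keras classifier:

```python
import coremltools as ct
import tensorflow as tf

# Tiny stand-in image classifier; in practice you would convert a trained model.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation="softmax"),  # e.g. 3 skin conditions
])

# Convert to Core ML; the resulting file gets bundled into the iOS app, and
# inference then runs entirely on-device, with no upload to the cloud.
mlmodel = ct.convert(
    model,
    inputs=[ct.ImageType(shape=(1, 224, 224, 3))],
    convert_to="neuralnetwork",     # older-style .mlmodel output
)
mlmodel.save("SkinClassifier.mlmodel")
```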

 

Machine Learning and Deep Learning Microprocessor Specialization

Artificial intelligence, machine learning and deep learning (AI | ML | DL) require massive amounts of computer processing power. And while the current solution is just to throw more processors at the problem, eventually that solution won’t scale as processing needs and the volume of detailed, real-time data increase[3].

One of the developments leading to the consumerization of artificial intelligence is the ability to exploit microprocessor or hardware specialization. The traditional Central Processing Unit (CPU) is being replaced by special-purpose microprocessors built to execute complex machine learning and deep learning algorithms.

This includes:

  • Graphics Processing Unit (GPU): a specialized electronic circuit designed to render 2D and 3D graphics together with a CPU, also known as a graphics card in gaming culture. GPUs are now being harnessed more broadly to accelerate computational workloads in areas such as financial modeling, cutting-edge scientific research, deep learning, analytics, and oil and gas exploration.
  • Tensor Processing Unit (TPU): a custom-built integrated circuit developed specifically for machine learning and tailored for TensorFlow (Google’s open-source machine learning framework). The TPU is designed to handle common machine learning and neural networking calculations for training and inference, specifically: matrix multiply, dot product, and quantization transforms (a minimal quantization sketch follows below). On production AI workloads that utilize neural network inference, the TPU is 15 to 30 times faster than contemporary GPUs and CPUs, according to Google.
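
To make “quantization transforms” concrete, here is a simplified symmetric int8 quantization in plain NumPy. Mapping 32-bit weights onto 8-bit integers is a big part of why inference-oriented chips can be so much faster and more power-efficient:

```python
import numpy as np

weights = np.random.default_rng(0).normal(size=5).astype(np.float32)

# Symmetric int8 quantization: map [-max|w|, +max|w|] onto [-127, 127].
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequantized = q.astype(np.float32) * scale

print("original:  ", weights)
print("int8:      ", q)
print("round-trip:", dequantized)   # small error, 4x less memory per weight
```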

Intel is designing a new chip specifically for Deep Learning called the Intel® Nervana™ Neural Network Processor (NNP)[4]. The Intel Nervana NNP supports deep learning primitives such as matrix multiplication and convolutions. Intel Nervana NNP enables better memory management for Deep Learning algorithms to achieve high levels of utilization of the massive amount of compute on each die.

The bottom line: faster training times for Deep Learning models.

Finally, a new company called “Groq” is building a special purpose chip that will run 400 trillion operations per second, more than twice as fast as Google’s TPU[5].

What do all these advancements in GPU and TPU mean to you the consumer?

“Smart” apps that leverage these powerful processors and the embedded AI | ML | DL frameworks to learn more about you to provide a hyper-personalized, prescriptive user experience.

It’ll be like a really smart, highly attentive personal assistant on steroids!

The Power of AI in Your Hands

Over the past few years, artificial intelligence has quietly worked its way into our everyday lives. Give a command to Siri or Alexa, and AI kicks in to translate what you said and look up the answer. Upload a photo to Facebook, and AI identifies the people in the photo. Enter a destination into Waze or Google Maps, and AI provides updated recommendations on the best route. Push a button, and AI parallel parks your car all by itself (dang, where was that during my driver’s test!).

With advances in computer processors and embedded AI | ML | DL frameworks, we are just beginning to see the use cases. And like the Derm Expert app highlights, the way that we live will never be the same.

Sources:

[1] Build More Intelligent Apps With Machine Learning

Figure 2: Core ML

[3] Are limitations of CPU speed and memory prevent us from creating AI systems

[4] Intel® Nervana™ Neural Network Processors (NNP) Redefine AI Silicon

[5] Groq Says It Will Reveal Potent Artificial Intelligence Chip Next Year

How the Eagles and Patriots Can Avoid a Championship Let Down: Play the Percentages
Mon, 29 Jan 2018


Figure 1: ESPN NFL Gamecast

It’s Super Bowl LII week and the Philadelphia Eagles and New England Patriots are on the precipice of a championship. One game to decide it all, where one decision could swing the fortunes of either team, depending on a single play call. What can Bill Belichick or Doug Pederson do to avoid a letdown or propel his team to victory?

It’s simple – play the percentages.

We only need to look back one year to observe how a coach’s thought process may have prevented his team from claiming a Super Bowl title.

It was Super Bowl LI. Tom Brady had just thrown an interception that the Atlanta Falcons returned for a touchdown. Atlanta held a 21 – 0 lead with 2:21 left in the first half. By ESPN’s projections, the Falcons at this point in the game had a 96.6% probability of winning.

Jump to the third quarter, the Patriots, then trailing 28-3, faced fourth-and-3 at their own 46-yard line with 6:04 on the clock. Tom Brady completed a pass to Danny Amendola for 17 yards. This single play, yielding a Patriots first down, extended the Patriots offensive drive and increased their “Win Expectancy” from 0.2 percent to 0.5 percent (+0.3 percent increase).

It wasn’t until the Patriots scored with 57 seconds remaining in the game to force overtime that they rose from handicappers’ deathbed and evened the game’s win probabilities (see Figure 2). The Patriots ultimately won the game in overtime, overcoming seemingly insurmountable odds.

Understanding Probabilities to Win

How did the New England Patriots achieve such an unlikely comeback? Or maybe more relevant – how could the Atlanta Falcons commit such a mind-boggling, unprecedented choke job?

Figure 2: Patriots versus Falcons Win Expectancy, Super Bowl LI

 

Let’s look to the card table to learn how basic probabilities can help humans make better decisions. From “A Blackjack Pro Explains How Ignoring the Odds Cost the Falcons the Super Bowl”, each decision in blackjack can be dictated by simple probabilities. The average blackjack player loses about 3 percent of his or her money. However, if the probabilities are played correctly, the house’s edge shrinks to about 0.5 percent. Unfortunately, even when humans know the right action to take, very few people actually play the probabilities correctly, because humans are overwhelmed with cognitive biases.

Like casinos’ algorithms that determine the odds and outcomes of everything from slot machines to roulette, NFL front offices would benefit from applying Machine Learning to analyze thousands of football games played over the past 10 years. They could then analyze all possible situations and calculate the probability of each outcome. Then all a coach needs to do is to follow the math. But like in blackjack, it can be hard to stay focused on a statistical-based strategy under the stress and excitement of the moment.
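
As a sketch of what such a model could look like (the features and training data below are synthetic and of my own choosing, not an actual NFL model), a simple logistic regression can map an in-game state to a win probability:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 5000
# Hypothetical game states: current score margin and minutes remaining.
margin = rng.integers(-28, 29, n)           # points ahead (negative = behind)
minutes_left = rng.uniform(0, 60, n)
# Synthetic outcomes: big late leads usually win (a toy assumption,
# standing in for thousands of real historical games).
p_win = 1 / (1 + np.exp(-margin / (1 + minutes_left / 10)))
won = rng.random(n) < p_win

X = np.column_stack([margin, minutes_left, margin / (minutes_left + 1)])
model = LogisticRegression(max_iter=1000).fit(X, won)

# Example query: up 19 points with 17 minutes to play (late 3rd quarter).
state = [[19, 17, 19 / (17 + 1)]]
print(f"estimated win probability: {model.predict_proba(state)[0, 1]:.1%}")
```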

Up 28-9 with two minutes left in the third quarter, the Atlanta Falcons had a 99 percent chance to win Super Bowl LI. But then the Falcons ignored simple probabilities and compounded bad decision upon bad decision:

  • First, Atlanta quarterback Matt Ryan did not let the play clock run down to fewer than 10 seconds on every Falcons offensive possession.
  • Second, by not running the football once they had a late lead (Falcons were gaining an above-average 5.8 yards per rush), they allowed the clock to stop on incomplete passes.

Both of these decisions gave the Patriots – and Tom Brady – more time to get back into the game. All Atlanta needed to do was execute a simple “run the ball” strategy and reduce the number of Patriots’ offensive possessions by one. Unfortunately for the Atlanta Falcons, their decision making was akin to hitting on a 15 in blackjack when the dealer had a six showing. The Falcons ignored basic probabilities and the result was the biggest turnaround in Super Bowl history…at their expense.
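
The blackjack analogy is easy to verify by simulation. This minimal Monte Carlo sketch (infinite-deck assumption, simplified dealer rules, and the player either stands or takes exactly one card) compares standing versus hitting on a hard 15 against a dealer 6:

```python
import random

CARDS = [2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10, 11]  # infinite-deck draws

def dealer_total(upcard):
    """Dealer hits to 17+; an ace counts as 11 unless that busts (simplified)."""
    total, aces = upcard, 0
    while total < 17:
        card = random.choice(CARDS)
        total += card
        aces += (card == 11)
        while total > 21 and aces:   # demote an ace from 11 to 1
            total -= 10
            aces -= 1
    return total

def play(hit, trials=200_000):
    """Return the player's score rate on hard 15 vs a dealer 6 (push = 0.5)."""
    score = 0.0
    for _ in range(trials):
        p = 15 + (random.choice(CARDS) if hit else 0)
        if p == 26:                  # drew an ace on hard 15: it counts as 1
            p = 16
        if p > 21:
            continue                 # player busts before the dealer plays
        d = dealer_total(6)
        if d > 21 or p > d:
            score += 1.0
        elif p == d:
            score += 0.5
    return score / trials

random.seed(1)
print(f"stand on hard 15 vs dealer 6: {play(hit=False):.3f}")
print(f"hit once on hard 15 vs 6:     {play(hit=True):.3f}")
```

Run it and standing comes out clearly ahead: the dealer with a 6 showing busts often, so hitting first only risks busting yourself before the dealer ever has to play.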

Humans Aren’t Good at Making Decisions

Human decision-making capabilities have evolved from millions of years of survival on the savanna. Necessity dictated that we become very good at recognizing patterns and making quick, instinctive survival decisions based upon those patterns (see the blog “Human Decision-Making in a Big Data World”).

Unfortunately, humans are lousy number crunchers. Consequently, humans have learned to rely upon heuristics, rules of thumb, anecdotal information, intuition and “gut” as our decision guides.

So what can Bill Belichick or Doug Pederson do to overcome our natural decision-making liabilities and keep their teams from becoming the next Atlanta Falcons? It starts with acknowledging and understanding our inherent decision-making flaws, or cognitive biases.

Awareness is the starting point, and while I could easily write a book on the subject, let’s cover a few of the more common decision-making traps, with recommendations on how to manage around them.

Type of Human Biases or Decision Traps

Trap: Over-confidence

Over-confidence is when a decision maker places a greater weight or value on what they know and assumes that what they don’t know isn’t important.

Corrective Action: The Falcons entered the Super Bowl with the second-ranked passing offense in the NFL in 2016, while also boasting the fifth-best running attack. However, when it became crunch time, Atlanta leaned on their passing game. To their detriment, they ignored their running attack, certain their MVP quarterback Matt Ryan could finish the job.

Had the Falcons’ coaching staff leveraged Machine Learning, they might have identified variables that provided better predictors of in-game performance (e.g. Tom Brady’s excellence in the 4th quarter, the Patriots’ late-game defensive tendencies), and avoided becoming overconfident in a passing game that wilted in the second half. As a standard operating practice, football front offices should apply Machine Learning to mine the large body of football game data to identify the “known unknowns” and “unknown unknowns” relationships buried in the data.

Trap: Anchoring Bias

An Anchoring Bias is a tendency to lock onto a single fact as a reference point for future decisions, even though that reference point may have no logical relevance to the decision at hand.

Corrective Action: The Falcons, having limited the Patriots to 215 yards plus two turnovers in the first half, had reason to feel good about their defense. However, one half does not make a football game. Buoyed by a first-half performance and a 21-3 halftime lead, the Falcons failed to adapt from what the first half showed them. The Patriots ran 23 plays over their final two first-half possessions, while Atlanta averaged 5 plays per drive in the first half despite scoring 14 offensive points (their third touchdown came via their defense). The length of the Patriots’ respective drives should have alerted Atlanta that New England was poised for greater offensive effectiveness in the second half.

Atlanta failed to identify, validate, vet and prioritize the most relevant in-game metrics to create a more effective second-half offensive game plan to keep the Patriots off the field and stymie their eventual rally.

Real-time evaluation of advanced metrics to monitor desired behaviors and outcomes is increasingly important to in-game success.

Figure 3: Human Decision Making

Trap: Framing Effect

The Framing Effect is a cognitive bias in which a person’s decision is influenced by how the decision is presented. For example, humans tend to avoid risk when a positive frame is presented, but seek risks when a negative frame is presented.

Corrective Action: Into the 4th quarter, the Falcons banked on a passing game that achieved 12.3 yards per completed pass in the first half. Had they simply followed the math (and the inevitable “regression to the mean”), the Falcons would have run the ball in order to burn clock and deny the Patriots the one additional possession that ultimately decided the game.

NFL coaches may not be inclined to invest the time to carefully frame in-game decisions or hypotheses. However, had Atlanta – prior to the game – leveraged Design Thinking techniques to create in-game scoring tables and maps charting the potential game flow, they could have referred to proven data, rather than relying on gut instinct, to ensure they made the most appropriate in-game decisions.

Trap: Risk Aversion

Risk aversion is the result of people’s preference for certainty over uncertainty and for minimizing the magnitude of the worst possible outcomes to which they are exposed.  For example, a “risk averse” investor prefers lower returns with known risks rather than higher returns with unknown risks.

Corrective Action: Again, in the 4th quarter, the Falcons turned risk aversion upside down. With nine pass attempts to four running attempts, they leaned on lower-probability passing plays (passing the ball downfield has a lower probability of success than running) rather than safely running the ball for first downs.

Falcons coaches did not take the time to understand the impact of Type I and Type II errors of decisions under different in-game situations (e.g., kick-offs, punts, third-and-long, fourth down, 2-point conversions, overtime). They could also have applied Reinforcement Learning algorithms to create analytic models of different in-game scenarios that objectively balance rewards and risks around desired outcomes (a toy sketch follows below).
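
As a toy illustration of that reinforcement-learning idea, here is an epsilon-greedy bandit choosing between two play calls. The reward distributions are invented for illustration; a real in-game model would condition on down, distance, score and clock:

```python
import random

random.seed(3)

def reward(play):
    """Hypothetical per-play utility; these numbers are invented."""
    if play == "run":
        return random.gauss(4.5, 2.0)   # steady gains, clock keeps running
    return random.gauss(5.5, 9.0)       # pass: bigger upside, far more variance

plays = ["run", "pass"]
q = {p: 0.0 for p in plays}             # learned value estimate per play call
n = {p: 0 for p in plays}
epsilon = 0.1                           # fraction of calls spent exploring

for _ in range(10_000):
    if random.random() < epsilon:
        play = random.choice(plays)     # explore
    else:
        play = max(plays, key=q.get)    # exploit the current best estimate
    r = reward(play)
    n[play] += 1
    q[play] += (r - q[play]) / n[play]  # incremental running mean

print({p: round(v, 2) for p, v in q.items()})
```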

Trap: Sunk Costs

Sunk costs are retrospective costs that have already been incurred and cannot be recovered. Consequently, sunk costs should not factor into future decisions and should be ignored as if they never happened.

Corrective Action: Coaches in any sport rely heavily on tendencies and patterns. However, every situation – even one that mirrors a previous game – is unique. Making the same decision in the same situation in a different game does not guarantee the same outcome. It is a difficult habit for coaches to quit, but as the use of data science in sports increases and the use of in-game analytics grows, coaches must ensure that sunk costs (i.e., previous in-game decisions that can’t be reversed) are identified and made less influential to in-game decision making.

Trap: Endowment Effect

Endowment Effect is the hypothesis that people ascribe more value to things merely because they own them. We over-value what we have, which leads to unrealistic expectations on price and terms (e.g., stock traders who become attached to a stock they own and consequently have trouble selling it).

Figure 4: The Endowment Effect

 

Corrective Action: The Falcons’ quarterback Matt Ryan was appearing in his first Super Bowl. The Patriots’ quarterback Tom Brady had a history of Super Bowl heroics. Did the Falcons coaches’ overconfidence in their quarterback cloud their judgment and reliance on key predictive performance variables (e.g., quarterback rating, yards after catch, effectiveness under pressure) to guide in-game decisions? Basing analytics models on flawed variables can lead to sub-optimal and even wrong decisions.

Trap: Confirmation Bias

Confirmation Bias is the tendency to interpret new evidence as confirmation of one’s existing beliefs or theories. Confirmation biases impact how people gather information, but they also influence how we interpret and recall information.

Corrective Action: Did the Falcons’ first-half performance confirm a belief that was proven false? In sports, momentum can lead to wild swings in outcome. Did excellence in executing their game plan during the first half, and resulting confirmation bias, lead the Falcons astray in the second half?

This is why sports teams are investing heavily in in-game predictive models, staffing their data science teams with experts from other fields in order to avoid introducing personal biases into the analytic models. The partnership between data scientists, who focus on identifying the most predictive data and algorithms, and coaches, who are responsible for the in-game decisions, represents a new dynamic in the 21st-century management of athletic teams.

Other Cognitive Biases of which to be aware include:

  • Herding (Safety in Numbers)
  • Mental Accounting
  • Reluctance to Own Mistakes (Revisionist History)
  • Confusing Luck with Skill
  • Regression to the Mean
  • Don’t Respect Randomness
  • Over-emphasize the Dramatic

Summary

Figure 5: Atlanta Falcons QB, Matt Ryan

Would a basic understanding of probabilities have saved the Atlanta Falcons from compounding a series of small but bad decisions into a painful loss? Maybe not, because understanding and acting are two different things. In the excitement of the moment, humans tend to forget what they’ve been taught and react instinctively.

Awareness is step one, but training is the ultimate solution. Decision makers need to be trained to “take a breath” and consult the models and numbers before rushing into a decision. Research shows that one of the keys to making clear-headed decisions is a feeling of control. NASA and the Navy SEALs accomplish that with repeated training[1].

Las Vegas is built on our inherent number-crunching flaws and our inability to think with a clear head when the excitement, flashing lights, and pounding music are driving us to use our gut, not our brains, to make decisions.

Don’t think for a moment that those majestic casinos are built by giving away money to gamblers.

Sources:
Figure 2: Win Probability
Figure 4: The Endowment Effect
Figure 5: Photo courtesy of atlantafalcons.com
[1]The Secret to Handling Pressure Like Astronauts, Navy Seals, and Samurai
Additional Sources on Cognitive Biases:
20 Cognitive Biases That Screw Up Your Decisions
Cognitive Bias Codex

Bill Schmarzo’s Top 2017 Big Data, Data Science and IOT Blogs
Wed, 24 Jan 2018

To put us on the path for a successful and engaging 2018, here is a quick review of my top 10 blogs from 2017.

#10. Is Data Science Really Science?

Science works within systems of laws, such as the laws of physics, thermodynamics, mathematics, and many others. Scientists can apply these laws to understand why certain actions lead to certain outcomes or why something is going to occur.

While there may never be “laws” that dictate human behaviors, in the world of IOT where organizations are melding analytics (machine learning and artificial intelligence) with physical products, we will see “data science” advancing beyond just “data” science. In IOT, the data science team must expand to include scientists and engineers from the physical sciences so that the team can understand and quantify the “why things happen” aspect of the analytic models. If not, the costs could be catastrophic.

Figure 1: Scientific Method Belief and Biases

 

Note: I’m adding Figure 1 to this blog to highlight the importance of the Scientific Method and understanding basic statistical techniques to ensure that one is building their analytics on unbiased data against unbiased hypotheses.

#9. Design Thinking: Future-proof Yourself from AI

While there is a high probability that machine learning and artificial intelligence will play an important role in whatever job you hold in the future, there is one way to “future-proof” your career…embrace the power of design thinking.

Design thinking is defined as human-centric design that builds upon a deep understanding of our users (e.g., their tendencies, propensities, inclinations, behaviors) to generate ideas, build prototypes, share what you’ve made, embrace the art of failure (i.e., fail fast but learn faster) and eventually put your innovative solution out into the world. And fortunately for us humans (who really excel at human-centric things), there is a tight correlation between design thinking and machine learning (see Figure 2).

Figure 2: Design Thinking and Machine Learning Mapping

#8. 5 Steps to Building a Big Data Business Strategy

“The problem is that, in many cases, big data is not used well. Companies are better at collecting data – about their customers, about their products, about competitors – than analyzing that data and designing strategy around it.” “Companies Love Big Data but Lack the Strategy to Use It Effectively,” Harvard Business Review

Build a business strategy that incorporates big data. Build a business strategy that uncovers detailed customer, product, service and operational insights that serve as the foundation for optimizing key operational processes, mitigating compliance and cyber-security risks, uncovering new revenue opportunities, and creating a more compelling, more differentiated customer or partner experience.

#7. What tomorrow’s business leaders need to know about Machine Learning.

Much of what comprises “Machine Learning” is really not new. Many of the algorithms that fall into the Machine Learning category are analytic algorithms that have been around for decades, including clustering, association rules, and decision trees. However, the detailed granularity of the data, the wide variety of data sources, and a massive increase in computing power have re-invigorated many of these mature algorithms.

Machine learning is a type of applied artificial intelligence (AI) that provides computers with the ability to gain knowledge without being explicitly programmed. Machine learning focuses on the development of computer programs that can change when exposed to new data (see Figure 4). How can businesses, and business leaders, take advantage?

Figure 4: Supervised and Unsupervised Machine Learning Algorithms
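
For example, one of those decades-old algorithms, k-means clustering, is now a few lines of code in any modern library. A toy sketch (synthetic customer data of my own invention) that segments customers by spend and visit frequency:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic customers: columns are [annual spend ($K), store visits per month].
customers = np.vstack([
    rng.normal([5, 2], [1.0, 0.5], (50, 2)),    # low-spend, infrequent shoppers
    rng.normal([25, 8], [4.0, 1.5], (50, 2)),   # high-spend, frequent shoppers
])

# Two segments recovered directly from the data, no labels required.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print("segment centers:", np.round(kmeans.cluster_centers_, 1))
```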

#6. Is Blockchain the Ultimate Enabler of Data Monetization?

Blockchain is a data structure that maintains a digital ledger of transactions among a distributed network of entities.  Think of a “distributed ledger” that uses cryptography to allow each participant in the transaction to add to the ledger in a secure way without the need for a central authority or central clearinghouse (see Figure 5).

Figure 5: How to Use Blockchain Technology to Retain More Customers
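
The “digital ledger” data structure itself is easy to sketch. Here is a minimal hash-chained ledger in Python; this is illustration only, as a real blockchain adds distributed consensus, digital signatures and much more:

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 over a block's JSON serialization."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "prev": "0" * 64, "tx": "genesis"}]

def append(tx):
    """Append a transaction, chaining it to the hash of the previous block."""
    chain.append({"index": len(chain), "prev": block_hash(chain[-1]), "tx": tx})

append("Alice sells dataset D1 to Bob")        # hypothetical data-market trades
append("Bob licenses model M1 to Carol")

# Tampering with any earlier block would break every later "prev" link:
assert all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))
print("ledger valid, length", len(chain))
```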

Is blockchain the ultimate enabler of data and analytics monetization, creating marketplaces where companies, individuals and even smart entities (cars, trucks, buildings, airports, malls) can share/sell/trade/barter their data and analytic insights directly with others?

The impact that has on a company’s financials could be overwhelming, or devastating, depending upon what side of business model transformation you sit.

#5. Data is a New Currency

When you insert something, a new demand, into a circular flow, you create an economic concept called the Multiplier Effect. It is a concept that countries use when considering how to invest money and how that investment, by having it distributed through a supply chain (like the example above), will impact the economy of their country.

Multiplier Effect Definition: “An effect in economics in which an increase in spending produces an increase in national income and consumption greater than the initial amount spent.”

Figure 6: Economic Multiplier Effect
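
The textbook version of that definition reduces to a geometric series: if each recipient re-spends a fraction of what they receive (the marginal propensity to consume, or MPC), an initial injection gets multiplied by 1 / (1 − MPC). A quick sketch with an assumed MPC of 0.8:

```python
mpc = 0.8                                  # assumed marginal propensity to consume
injection = 1_000_000                      # initial new spending, in dollars

multiplier = 1 / (1 - mpc)                 # closed form of the geometric series
print(multiplier, injection * multiplier)  # 5.0 -> $5M of total economic activity

# Same answer by summing successive rounds of re-spending:
total = sum(injection * mpc ** k for k in range(200))
print(round(total))
```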

 

Data exhibits a Network Effect, where the same data can be used simultaneously across multiple use cases, thereby increasing its value to the organization. I would contend that this network effect is, in principle, the same as the Multiplier Effect.

#4. 5 Questions that Define Your Digital Transformation

I had the opportunity in 2017 to give a 10-minute keynote at DataWorks Summit 2017. What sort of keynote can one give in just 10 minutes? Ten minutes is not long for a keynote, and to be honest, I struggled with what to say.

But after some brainstorming with my marketing experts, we came up with an idea:  Pose five questions that every organization needs to consider as they prepare themselves for digital transformation.  And while I didn’t have enough time in 10 minutes to answer those questions in a keynote, I certainly did in a blog!

Figure 7: 5 Questions that Frame Your Digital Transformation

 

You can also check out a video of my DataWorks Summit keynote presentation, complete with air guitar at the end so that I could embarrass my daughter (my presentation starts around the 39:30 mark)!

#3. Can Design Thinking Unleash Organizational Innovation?

Design Thinking, or human-centered design, is all about building a deep empathy with the people you’re designing for; generating tons of ideas; building a bunch of prototypes; sharing what you’ve made with the people you’re designing for; and eventually putting your innovative new solution out in the world (see Figure 8).

Figure 8: Stanford d.school Design Thinking Process

 

There is a good reason why Stanford’s d.school does not sit within one of their existing schools. Design thinking is used in almost all of Stanford’s schools including business, computer science, electrical, mechanical, and even healthcare.  Design thinking appears to be one of the secret sauces to Stanford’s success and cultivating the entrepreneurial spirit of its students and faculty (and neighbors, in my case).

#2. The Future Is Intelligent Apps

I have seen the future!  The future is a collision between big data (and data science) and application development that will yield a world of “intelligent apps.”

These “intelligent apps” combine customer, product, and operational insights (uncovered with predictive and prescriptive analytics) with modern application development tools and user-centric design to create a more compelling, more prescriptive user experience.

Intelligent apps will support or enable key user decisions, while continually learning from the user interactions to become even more relevant and valuable to those users.

The journey to building intelligent applications starts by understanding the decisions that key business constituents need to make in supporting their business and operational objectives.
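As a minimal, hypothetical sketch of that loop – score a key user decision, prescribe an action, and learn from the user’s response – consider the following; the model choice, feature shapes, and action names are all assumptions for illustration, not a production design:

```python
# Hypothetical "intelligent app" loop: predict, prescribe, then keep
# learning from user interactions via incremental (online) training.
import numpy as np
from sklearn.linear_model import SGDClassifier

# loss="log_loss" (named "log" in scikit-learn < 1.1) enables
# probability estimates and supports incremental learning.
model = SGDClassifier(loss="log_loss")
model.partial_fit(np.random.rand(50, 3),            # stand-in for
                  np.random.randint(0, 2, 50),      # initial training
                  classes=[0, 1])

def recommend(user_features: np.ndarray) -> str:
    """Prescribe an action based on the predicted probability."""
    p = model.predict_proba(user_features.reshape(1, -1))[0, 1]
    return "offer_discount" if p > 0.5 else "no_action"

def record_feedback(user_features: np.ndarray, outcome: int) -> None:
    """Continually learn from each user interaction."""
    model.partial_fit(user_features.reshape(1, -1), [outcome])
```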

Figure 9: Intelligent Application Stack

 

And my #1 blog of 2017 (drum roll please)…

#1. Difference between Big Data and Internet of Things

What are the differences between big data and IOT analytics? Big data analyzes large amounts of mostly human-generated data to support longer-duration use cases, while IOT aggregates and compresses massive amounts of low-latency, short-duration, high-volume machine-generated data coming from a wide variety of sensors to support real-time use cases.

I don’t believe that loading sensor data into a data lake and performing data science to create predictive analytic models qualifies as doing IOT analytics.  To me, that’s just big data (and potentially REALLY BIG DATA with all that sensor data).  Claiming to deliver IOT analytic solutions requires big data (with data science and a data lake), but IOT analytics must also include:

  • Streaming data management with the ability to ingest, aggregate (e.g., mean, median, mode), and compress real-time data coming off a wide variety of sensor devices “at the edge” of the network.
  • Edge analytics that automatically analyze real-time sensor data and render real-time decisions (actions) at the edge of the network to optimize operational performance (e.g., blade angle or yaw), or to flag unusual performance or behaviors for immediate investigation (security breaches, fraud detection). A minimal sketch of this ingest-aggregate-act pattern follows below.
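To make that pattern concrete, here is a minimal sketch of edge-style streaming aggregation and decisioning, assuming a simple stream of (sensor_id, value) readings. A production IOT stack would use a dedicated streaming engine; treat this purely as an illustration of the ingest/aggregate/act loop, with an illustrative alert threshold:

```python
# A minimal sketch of edge-style streaming aggregation: ingest a
# reading, compress it into windowed summary statistics, and render a
# real-time decision at the edge.
from collections import defaultdict, deque
from statistics import mean, median

WINDOW = 60  # keep only the last 60 readings per sensor (compression)
windows = defaultdict(lambda: deque(maxlen=WINDOW))

def on_reading(sensor_id, value):
    """Ingest a reading, aggregate the window, and decide at the edge."""
    window = windows[sensor_id]
    window.append(value)
    # Aggregate/compress the raw stream into summary statistics.
    window_mean, window_median = mean(window), median(window)
    # Edge analytics: decide without a round trip to the data center;
    # the 1.5x-of-mean threshold is an illustrative choice.
    if value > window_mean * 1.5:
        print(f"ALERT {sensor_id}: {value} flagged "
              f"(mean={window_mean:.1f}, median={window_median:.1f})")

on_reading("turbine-07", 10.0)
on_reading("turbine-07", 40.0)  # well above the running mean -> alert
```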

Sources:

Figure 1: Scientific Method Beliefs and Biases

Figure 4: Supervised and Unsupervised Machine Learning Algorithms

Figure 5: How to Use Blockchain Technology to Retain More Customers

 

The post Bill Schmarzo’s Top 2017 Big Data, Data Science and IOT Blogs appeared first on InFocus Blog | Dell EMC Services.

]]>
https://infocus.dellemc.com/william_schmarzo/bill-schmarzos-top-2017-big-data-data-science-and-iot-blogs/feed/ 0
Predict ► Prescribe ► Prevent Analytics Value Cycle https://infocus.dellemc.com/william_schmarzo/predict-%e2%96%ba-prescribe-%e2%96%ba-prevent-analytics-value-cycle/ https://infocus.dellemc.com/william_schmarzo/predict-%e2%96%ba-prescribe-%e2%96%ba-prevent-analytics-value-cycle/#respond Thu, 18 Jan 2018 10:00:07 +0000 https://infocus.dellemc.com/?p=33740 Organizations looking for justification to move beyond legacy reporting should review this little ditty from the healthcare industry: The Institute of Medicine (IOM) estimates that the United States loses $750 billion annually to medical fraud, inefficiencies, and other siphons in the healthcare system[1]. The report identified six major areas of waste: unnecessary services ($210 billion […]

The post Predict ► Prescribe ► Prevent Analytics Value Cycle appeared first on InFocus Blog | Dell EMC Services.

]]>
Organizations looking for justification to move beyond legacy reporting should review this little ditty from the healthcare industry:

The Institute of Medicine (IOM) estimates that the United States loses $750 billion annually to medical fraud, inefficiencies, and other siphons in the healthcare system[1].

The report identified six major areas of waste: unnecessary services ($210 billion annually); inefficient delivery of care ($130 billion); excess administrative costs ($190 billion); inflated prices ($105 billion); prevention failures ($55 billion), and fraud ($75 billion). Adjusting for some overlap among the categories, the panel settled on an estimate of $750 billion (see Figure 1).

Figure 1: Preventable Healthcare Costs

“Best way to reduce operating and business costs and risks is to prevent them!”

What is Preventive Analytics?

“Preventive Analytics” typically refers to solutions that monitor systems to identify and prevent unplanned failures and downtime, and the costs associated with them. Yet that narrow definition misses the larger opportunity to apply preventive analytics across a broad range of industry problems. Companies across numerous verticals – besieged by fraud, waste, abuse, foodborne illnesses, excessive and obsolete inventory, and product returns – would benefit from a greater commitment to preventive analytics:

 “A billion here and a billion there, and pretty soon we’re talking about real money!”

Predict ► Prescribe ► Prevent Analytics Value Cycle

To date, technologists have focused heavily on three core categories of analytics – Descriptive, Predictive, and Prescriptive. Surprisingly, little has been written about Preventive Analytics and how it complements the other three. As a refresher, these analytics groupings are defined as:

  • Descriptive Analytics: Use Business Intelligence and data warehousing to support management and operational reporting, and dashboards using aggregated data. Descriptive analytics answers the question: “What has happened?”
  • Predictive Analytics: Use statistical models to quantify cause-and-effect to predict what is likely to happen or how someone is likely to react (i.e., a consumer’s FICO credit score predicting likelihood to repay a loan). Predictive Analytics answers the question: “What is likely to happen?”
  • Prescriptive Analytics: Use optimization algorithms to prescribe actions to improve human decision-making around outcomes. Prescriptive Analytics answers the question: “What should we do?”

Now we need to add the category for Preventive Analytics.

  • Preventive Analytics: Use deep learning and machine learning to make preventive recommendations to avoid undesirable situations and outcomes. Preventive Analytics answers the question: “What actions should we take to prevent undesirable outcomes?”

Figure 2 lays out the Analytics Value Cycle progression from Predictive to Prescriptive to Preventive analytics.

Figure 2: Predict to Prescribe to Prevent Analytics Value Cycle

See the blog “Artificial Intelligence is not ‘Fake’ Intelligence” for deeper definitions on these analytics classifications.
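To make the progression concrete, here is a minimal sketch of the Predict ► Prescribe ► Prevent cycle applied to customer churn. The model stub, thresholds, and actions are illustrative assumptions rather than a production design:

```python
# Predict -> Prescribe -> Prevent, illustrated with customer churn.
def predict_churn_risk(customer: dict) -> float:
    """Predictive: what is likely to happen? (stand-in for a real model)"""
    return 0.9 if customer["support_tickets"] > 5 else 0.2

def prescribe_action(risk: float) -> str:
    """Prescriptive: what should we do?"""
    return "assign_account_manager" if risk > 0.7 else "routine_touchpoint"

def prevent(customer: dict) -> None:
    """Preventive: act before the undesirable outcome occurs."""
    risk = predict_churn_risk(customer)
    action = prescribe_action(risk)
    print(f"{customer['id']}: risk={risk:.0%} -> {action}")

prevent({"id": "C-123", "support_tickets": 7})
# C-123: risk=90% -> assign_account_manager
```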

Preventive Analytics Examples

Let’s make preventive analytics come to life through some examples. Imagine a local government searching for opportunities to 1) reduce costs while 2) improving resident quality of life. Tackling fraud, waste, and abuse while simultaneously increasing citizen satisfaction and quality of life is a compelling win-win combination!

Another example is occurring within “Smart City” initiatives. Traffic data helps local agencies better predict traffic flows and patterns in order to decrease the rate of traffic jams and traffic accidents (see Figure 3).

Figure 3: From Predicting to Preventing Traffic Accidents

Below are more opportunities for city, state, and local governments to apply preventive analytics to decrease costs while improving citizen quality of life:

  • Reduce hospital-acquired infections and hospital readmissions.
  • Reduce graffiti and incidents of crime.
  • Reduce threats and losses from cyber-attacks.
  • Reduce costs associated with unexpected equipment failures.
  • Reduce the impact of electricity, network, and utility outages.

Opportunities are only limited by the organization’s creative thinking. For the average business, the most significant preventive analytics opportunities will have bottom line impacts – preventing customer and employee attrition.

Biggest Financial Winner: Preventing Customer Attrition

It is estimated that acquiring a new customer is 5 to 25 times more expensive than retaining an existing one. Consider the research by Frederick Reichheld of Bain & Company (inventor of the Net Promoter Score) on customer retention in the banking industry.

Reichheld’s data reveals that increasing customer retention rates by 5 percent boosts profits by 25 percent to 95 percent.

Furthermore, reducing defections by just 5 percent generated 85 percent more profits in one bank’s branch system, 50 percent more in an insurance brokerage, and 30 percent more in an auto-service chain. MBNA America found that a 5 percent improvement in defection rates increases its average customer value by more than 125 percent[2] (see Figure 4).

Figure 4: “Customer Experience Management for Startups”

Here are customer attrition or churn rates for other industries[3]:

  • American credit card companies: approximately 20%
  • European cellular carriers: 20-38%
  • Software-as-a-Service companies: 5-7%
  • Retail banks: 20-25%
  • In 2003, the churn rate of daily newspaper subscriptions in the U.S. was 58%

The financial impact of preventing customer attrition is staggering!
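A quick, hypothetical back-of-the-envelope shows why: with a constant monthly churn rate c, expected customer lifetime is roughly 1/c months, so customer lifetime value is approximately monthly margin divided by churn. The margin figure below is purely illustrative:

```python
# Why small retention gains matter: CLV ~= monthly_margin / churn_rate
# under a constant monthly churn assumption. Numbers are hypothetical.
monthly_margin = 40.0  # illustrative margin per customer per month

for churn in (0.025, 0.020):  # 2.5% vs. 2.0% monthly churn
    clv = monthly_margin / churn
    print(f"churn={churn:.1%}: expected CLV = ${clv:,.0f}")
# churn=2.5%: expected CLV = $1,600
# churn=2.0%: expected CLV = $2,000  (a 25% lift from a 0.5-point cut)
```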

See my blog “Optimizing the Customer Lifecycle with Customer Insights” for more details on leveraging Big Data to rewire your customer lifecycle management processes.

Unrealized Financial Winner: Preventing Employee Attrition

It isn’t just customer attrition that carries significant costs. Analysis by the Center for American Progress[4] reviewed 30 case studies published between 1992 and 2007 that provided cost estimates of employee turnover. The analysis found that businesses spend approximately 20 percent of an employee’s annual salary to replace that worker.  That cost is even higher for highly skilled workers and senior management (see Figure 5).

Figure 5: Hidden costs associated with Employee Attrition

Yes, it is financially smart for organizations to use Preventive Analytics to identify, understand, and act on at-risk employees – reducing replacement costs while improving employee workplace satisfaction.
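To make the 20 percent figure concrete, here is a quick, hypothetical estimate of annual turnover cost; the headcount, turnover rate, and salary inputs are illustrative assumptions:

```python
# Annual turnover cost using the ~20%-of-salary replacement figure
# cited above. All inputs are illustrative assumptions.
headcount = 2_000
annual_turnover_rate = 0.15   # 15% of employees leave each year
avg_salary = 80_000
replacement_cost_pct = 0.20   # Center for American Progress figure

annual_cost = (headcount * annual_turnover_rate
               * avg_salary * replacement_cost_pct)
print(f"Estimated annual turnover cost: ${annual_cost:,.0f}")
# Estimated annual turnover cost: $4,800,000
```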

Preventive Analytics Summary

Preventive Analytics – like Descriptive, Predictive and Prescriptive Analytics – plays an important role in exploiting the power of data and analytics to drive digital transformation and create an intelligent enterprise (see Figure 6).

Figure 6: Creating the “Intelligent Enterprise”

The “Predict ► Prescribe ► Prevent” Analytics Value Cycle has potential to dramatically reduce or eliminate costs associated with fraud, waste, and abuse while simultaneously increasing customer, employee, and citizen satisfaction and quality of life.

Hard to beat those benefits!

[1] “How the U.S. Health-Care System Wastes $750 Billion Annually”

[2] “Zero Defections: Quality Comes to Services”

[3] “How to Calculate (and Lower!) Your Customer Churn Rate”

[4] “There Are Significant Business Costs to Replacing Employees”

 

The post Predict ► Prescribe ► Prevent Analytics Value Cycle appeared first on InFocus Blog | Dell EMC Services.

]]>
https://infocus.dellemc.com/william_schmarzo/predict-%e2%96%ba-prescribe-%e2%96%ba-prevent-analytics-value-cycle/feed/ 0
Big Data Isn’t a Thing; Big Data is a State of Mind https://infocus.dellemc.com/william_schmarzo/big-data-isnt-a-thing-big-data-is-a-state-of-mind/ https://infocus.dellemc.com/william_schmarzo/big-data-isnt-a-thing-big-data-is-a-state-of-mind/#respond Wed, 17 Jan 2018 10:00:08 +0000 https://infocus.dellemc.com/?p=33449 “Big Data is dead.” “Big Data is passé.” “We no longer need Big Data; we need Machine Learning now.” As we end 2017 and look forward to big (data) things in 2018, the most important lessons of 2017 – in fact, maybe the most important lesson going forward – is that Big Data is NOT […]

The post Big Data Isn’t a Thing; Big Data is a State of Mind appeared first on InFocus Blog | Dell EMC Services.

]]>
“Big Data is dead.” “Big Data is passé.”

“We no longer need Big Data; we need Machine Learning now.”

As we end 2017 and look forward to big (data) things in 2018, the most important lesson of 2017 – in fact, maybe the most important lesson going forward – is that Big Data is NOT a thing. Big Data isn’t about the volume, variety or velocity of data any more than car racing is about the gasoline. Big Data is a state of mind. Big Data is about becoming more effective at leveraging data and analytics to power your business models (see Figure 1).

Figure 1: Becoming More Effective at Leveraging Big Data to Power your Business

 

Big Data is a State of Mind

Big Data is about improving an organization’s ability to leverage data and analytics to power its business models: to optimize key business and operational use cases; to reduce security and compliance risk; to uncover new revenue opportunities; and to create more compelling, differentiated customer engagements. The technical components – building blocks – of a “big data state of mind” include:

  • Data: Ability to collect and aggregate detailed data from a wide variety of data sources including structured (tables, relational databases), semi-structured (log files, XML, JSON) and unstructured data sources (text, video, audio, images).
  • Analytics: Ability to leverage advanced analytics (data science, deep learning, machine learning, artificial intelligence) to uncover customer, product, service, operational, and market insights.

These are important technology building blocks, but by themselves, they provide NO business or financial value. These are necessary but not sufficient capabilities for driving the most important aspect of Big Data – Data Monetization!

Big Data is About Data Monetization

Big Data is about exploiting the unique characteristics of data and analytics as digital assets to create new sources of economic value for the organization. Most assets exhibit a one-to-one transactional relationship. For example, the quantifiable value of a dollar as an asset is finite – it can only be used to buy one item or service at a time. The same holds for human assets, as a person can only do one job at a time. But measuring the value of data as an asset is not constrained by those transactional limitations. In fact, data is an unusual asset: it exhibits an Economic Multiplier Effect, whereby it never depletes or wears out and can be used simultaneously across multiple use cases at near-zero marginal cost. This makes data a powerful asset in which to invest (see Figure 2).

Figure 2: Economic Multiplier Effect

 

Understanding the economic characteristics of data and analytics as digital assets is the first step in monetizing your data via predictive, prescriptive, and preventive analytics.

See the blog series at “Determining Economic Predicted Value of Data (EPvD) Series” for more insights about how organizations can exploit the unique economic characteristics of data and analytics as digital assets.

Big Data is a Business Discipline

Leading organizations that embrace digital transformation see data and analytics as a business discipline, not just another IT task. And tomorrow’s business leaders must become experts at leveraging data and analytics to power their business models. The most valuable companies today (from a market cap perspective) are those organizations that are mastering the use of Big Data (with artificial intelligence, machine learning, deep learning) to derive and drive new sources of value (see Figure 3).

Figure 3: Most Valuable Companies in the World

 

At the University of San Francisco, I teach the “Big Data MBA” where I am educating tomorrow’s business leaders how to embrace data and analytics as the next modern business discipline. A Master of Business Administration (MBA) provides theoretical and practical training to teach business leaders important business disciplines such as accounting, finance, operations management and marketing. We want to treat analytics as a similar business discipline.

Data Science is the Data Monetization Engine

Data Science is used to identify the variables and metrics that might be better predictors of business and operational performance, and to quantify cause-and-effect in order to predict likely actions and outcomes; prescribe corrective actions or recommendations; prevent costly outcomes; and continuously learn and adapt as the environment changes.

To do that, data scientists need to learn a wide variety of statistical, data mining, deep learning, machine learning, and artificial intelligence techniques and tools (see Figure 4).

Figure 4: Examples of Advanced Analytics
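As a minimal sketch of that first step – surfacing the variables that might be better predictors of performance – consider the following; the DataFrame, column names, and outcome label are assumptions for illustration:

```python
# Rank candidate variables by predictive signal using a random forest's
# feature importances. Data and column names are illustrative.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.DataFrame({                      # stand-in for real business data
    "tenure_months":   [3, 48, 12, 60, 6, 24],
    "support_tickets": [7, 0, 4, 1, 9, 2],
    "monthly_spend":   [20, 90, 45, 120, 15, 60],
    "outcome":         [1, 0, 1, 0, 1, 0],  # e.g., churned yes/no
})

X, y = df.drop(columns="outcome"), df["outcome"]
model = RandomForestClassifier(n_estimators=200, random_state=42).fit(X, y)

# Which variables carry the most predictive signal?
ranking = pd.Series(model.feature_importances_, index=X.columns)
print(ranking.sort_values(ascending=False))
```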

 

Data monetization requires close collaboration with business stakeholders who own the important responsibility of setting the business and analytics strategy. These stakeholders also unambiguously define the hypotheses to be tested, and articulate how the resulting analytic outcomes will be operationalized and monetized. The key to enlisting business leadership is to turn them into “Citizens of Data Science” and to teach them to “Think Like a Data Scientist.”

This includes:

  • Use case identification, validation and prioritization that begins with an end in mind.
  • Develop personas for each key business stakeholder and constituent to understand their responsibilities, key decisions, and impediments to success.
  • Brainstorming variables and metrics that might be better predictors of performance.
  • Creating actionable, prescriptive analytic insights and recommendations that drive measurably better operational and business decisions.
  • Articulating how the analytic outcomes will be operationalized or put into action.

Check out the infographic “Think Like A Data Scientist” for more information. It also includes a workbook that guides the “thinking like a data scientist” process.

A Big Data State of Mind

One of my favorite articles (So, What Is Machine Learning Anyways?) does a great job of summarizing the important relationship between Big Data and Machine Learning:

  • Big Data started when the Internet created a treasure trove of website and search data. Today that data has been augmented by social media, mobile, wearables, IOT, and even microphones and cameras that are constantly collecting information.
  • With so much data readily available, machine learning provides a method to organize that data into meaningful patterns. Machine learning sorts through those troves of data to discern patterns and predict new ones.
  • Machine learning plays a key role in the development of artificial intelligence. Artificial intelligence refers to a machine’s ability to perform intelligent tasks, whereas machine learning refers to the automated process by which machines weed out meaningful patterns in data. Without machine learning, artificial intelligence wouldn’t be possible.

Though there are many critical building blocks associated with Big Data, leading organizations are quickly realizing that Big Data isn’t a thing.

Big Data is a mindset about transforming business leadership to become more effective at leveraging data and analytics to power the organization’s business models (see Figure 5).

Figure 5: Leveraging Data and Analytics to Create an Intelligent Enterprise

 

So, how effective is your organization at leveraging #BigData and #MachineLearning to power your business models and create an intelligent organization?

The post Big Data Isn’t a Thing; Big Data is a State of Mind appeared first on InFocus Blog | Dell EMC Services.

]]>
https://infocus.dellemc.com/william_schmarzo/big-data-isnt-a-thing-big-data-is-a-state-of-mind/feed/ 0
Lessons in Becoming an Effective Data Scientist https://infocus.dellemc.com/william_schmarzo/lessons-in-becoming-an-effective-data-scientist/ https://infocus.dellemc.com/william_schmarzo/lessons-in-becoming-an-effective-data-scientist/#respond Mon, 15 Jan 2018 10:00:54 +0000 https://infocus.dellemc.com/?p=33456 I was recently a guest lecturer at the University of California Berkeley Extension in San Francisco. On a lovely Saturday afternoon, the classroom was crowded with students of all ages learning the tools of the modern economy. The craftspeople of the “Analytics Revolution” were busy learning new skills and tools that will prepare them for […]

The post Lessons in Becoming an Effective Data Scientist appeared first on InFocus Blog | Dell EMC Services.

]]>
I was recently a guest lecturer at the University of California Berkeley Extension in San Francisco. On a lovely Saturday afternoon, the classroom was crowded with students of all ages learning the tools of the modern economy. The craftspeople of the “Analytics Revolution” were busy learning new skills and tools that will prepare them for this Brave New World of analytics. I was blown away by their dedication!

As we teach the next generation, it’s important that we focus more on capabilities and less on skills. What I mean is that “learning TensorFlow” isn’t nearly as important as “learning how to learn TensorFlow.”

We need to make sure that we teach concepts and methodologies along with the tools. We should teach the “What” and “Why” as well as the “How” so we don’t put our students in a situation where they “can’t see the forest for the trees.”

This brings me to a recent article, “What IBM Looks for in a Data Scientist,” which outlines the skills IBM seeks in a data scientist. The list is very useful, especially for someone pursuing such a career:

  1. Training as a scientist with an MS or PhD.
  2. Expertise in machine learning and statistics with an emphasis on decision optimization.
  3. Expertise in R, Python or Scala.
  4. Ability to transform and manage large data sets.
  5. Proven ability to apply the skills above to real-world business problems.
  6. Ability to evaluate model performance and tune it accordingly.

Unfortunately, this is a tactical list, not a strategic list. In fact, some of the points are too granular and too focused on “how” versus “why.”  For example, on point #3, it’s more important to know how to program than it is to know a specific language. It’s more important to learn the concepts and approach to programming effectively than it is to learn the tools themselves. The minute you think you’re an expert at R or Python or Scala, along comes Julia. It’s important to develop transferable skills rather than having to re-educate yourself each time a new tool arrives.

In a world driven by the rapid introduction and adoption of open source tools and frameworks (like TensorFlow for machine learning), expertise in a tool is fleeting.  However, mastery of the concepts and approaches for which those tools are used is critical because being a data scientist is more than just a bag of skills. The best data scientists are about outcomes and results.

Data Science DEPP Engagement Process

Our data science team at Dell EMC uses a methodology called DEPP that guides the collaboration with the business stakeholders through the following stages:

  • Descriptive Analytics to clearly understand what happened and how the business is measuring success.
  • Exploratory Analytics to understand the financial, business and operational drivers behind what happened.
  • Predictive Analytics to transition the business stakeholder mindset to focus on predicting what is likely to happen.
  • Prescriptive Analytics to identify actions or recommendations based upon the measures of business success and the Predictive Analytics.

The DEPP Methodology is an agile and iterative process that continues to evolve in scope and complexity as our clients mature in their advanced analytics capabilities (see Figure 1).

Figure 1: Dell EMC DEPP Data Science Collaborative Methodology

Importance of Humility

The first skill that I look for when engaging with or hiring a data scientist is humility: the ability to listen to and engage with others who may not seem as smart as they are. And as you can see from our DEPP methodology, humility is the key to driving collaboration between the business stakeholders (who will never understand data science to the level that a data scientist does) and the data scientist (who will never understand the business to the level that the business stakeholders do).

Humility is critical to our DEPP methodology because you can’t learn what’s important for the business if you aren’t willing to acknowledge that you might not know everything.

Humility is one of the secrets to effective collaboration. Nowhere does business/data science collaboration play a more important role than in hypothesis development.

A hypothesis is a formal statement that presents the expected relationship between an independent and dependent variable. (Creswell, 1994)

If you get the hypothesis – and the metrics against which you are going to measure success – wrong, nothing the data scientist does to support that hypothesis matters. Worse, you are not only likely to achieve suboptimal results; you could achieve the wrong results altogether.

For example, in the healthcare industry, we are seeing the disastrous effects of the wrong metrics (see the blog “Unintended Consequences of the Wrong Measures” for more details). Instead of using “Patient Satisfaction” as the metric against which to measure doctor and hospital effectiveness (which is leading to unintended consequences), the healthcare industry may benefit from a more holistic measure of success – for example, a “Quality and Effectiveness of Care” score combined with “Readmissions” and “Hospital-Acquired Infections” scores.

Being off in your hypothesis by just one degree can be disastrous. For example, if you were flying from San Francisco to Washington, D.C. and were off by a mere one degree at takeoff, you’d end up on the other side of Baltimore, 42.6 miles away (“Impact of A Mere One-Degree Difference”).

Figure 2: Ramifications of being off 1 degree

 

Get the hypothesis wrong, even by one degree, and the results could be wrong or even disastrous (especially if you have tickets to watch the Washington Redskins play football and not the Baltimore Ravens).
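A quick back-of-the-envelope check of that one-degree figure, assuming a roughly 2,440-mile San Francisco-to-Washington flight:

```python
import math

# Lateral offset after flying `distance` miles at a 1-degree heading
# error (small-angle geometry: offset = distance * tan(error)).
distance_miles = 2_440  # assumed SF-to-DC distance
offset = distance_miles * math.tan(math.radians(1.0))
print(f"~{offset:.1f} miles off course")  # ~42.6 miles
```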

Type I / Type II Errors

Being humble also means conceding that you may be wrong, particularly with analytic models that may not always deliver the right predictions or outcomes. Here, a solid understanding of the business or organizational costs of Type I (False Positive) and Type II (False Negative) errors is important, and understanding the ramifications of such errors requires close collaboration with the business stakeholders (see Figure 3).

Figure 3: Understanding Type I Errors and Type II Errors

See the blog “Understanding Type I and Type II Errors” for more details.
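To make the trade-off concrete, here is a minimal sketch of weighing Type I and Type II error costs when choosing a decision threshold; the scores, labels, and dollar costs are illustrative assumptions that would, in practice, be agreed with the business stakeholders:

```python
# Choose a threshold by minimizing expected cost of Type I (false
# positive) and Type II (false negative) errors. All values illustrative.
import numpy as np

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])          # actual outcomes
scores = np.array([.2, .6, .8, .4, .1, .9, .7, .3])  # model probabilities

cost_fp, cost_fn = 50, 500   # a missed case costs 10x a false alarm

for threshold in (0.3, 0.5, 0.7):
    pred = scores >= threshold
    fp = np.sum(pred & (y_true == 0))    # Type I errors
    fn = np.sum(~pred & (y_true == 1))   # Type II errors
    print(f"t={threshold}: FP={fp} FN={fn} "
          f"cost=${fp * cost_fp + fn * cost_fn}")
```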

Summary

In my classes, I focus on the “What” and “Why” versus spending too much time on the “How”. I want my students to have a framework that enables them to understand how the different technologies, techniques and tools can be more effectively used.

I’m not teaching my students data science; I’m teaching them how to learn data science. It is an important distinction that can be humbling, but it results in a more detail-oriented student who wishes not only to become a data scientist, but to become an effective data scientist. As teachers, it is important that we know the difference.

The post Lessons in Becoming an Effective Data Scientist appeared first on InFocus Blog | Dell EMC Services.

]]>
https://infocus.dellemc.com/william_schmarzo/lessons-in-becoming-an-effective-data-scientist/feed/ 0
Avoiding the IOT ‘Twister’ Business Strategy https://infocus.dellemc.com/william_schmarzo/avoiding-the-iot-twister-business-strategy/ https://infocus.dellemc.com/william_schmarzo/avoiding-the-iot-twister-business-strategy/#respond Wed, 10 Jan 2018 10:00:05 +0000 https://infocus.dellemc.com/?p=33676 Most organizations’ ‪IOT Strategy look like a game of ‪‘Twister’ with progress across important IOT capabilities such as architecture, technology, data, ‪analytics and governance; variables comprising a series of random investments and decisions. There is something very different about the Internet of Things (IOT) craze versus other technology-induced bacchanalias. The IOT craze appears to be […]

The post Avoiding the IOT ‘Twister’ Business Strategy appeared first on InFocus Blog | Dell EMC Services.

]]>

Most organizations’ IOT strategies look like a game of ‘Twister’: progress across important IOT capabilities – architecture, technology, data, analytics, and governance – amounts to a series of random investments and decisions.

There is something very different about the Internet of Things (IOT) craze versus other technology-induced bacchanalias. The IOT craze appears to be driven by the business stakeholders versus the Information Technology (IT) organization.

Could it be said then that it’s the tangible financial Return on Investment (ROI) that drives the IOT business strategy?

There are many IOT vendors who provide IOT “solutions” to address one particular operational need (e.g., predictive equipment maintenance, energy optimization, load balancing, network optimization) that in many cases provides a clear and tangible ROI.

This makes the purchase decision relatively easy for the business stakeholders because these decisions pay for themselves in 24 months or less (see Figure 2).

Figure 2: IOT Applications by Industry

 

However, many of these IOT solutions are “point solutions” – they don’t necessarily integrate into the existing corporate IT architecture and support organizations – which makes them easier for the business to buy and deploy because the “solution” doesn’t involve IT. But the standalone nature of these IOT solutions introduces all sorts of challenges for the CIO and IT organization, including:

  • Lack of integration into a master IOT architecture
  • Difficulty scaling across the entire organization
  • Difficulty operationalizing the point solution

Let’s look at each of these IT challenges in more detail.

IOT Challenge #1: Integration Challenge

Integrating IOT vendor point solutions into a more holistic IOT architecture is not trivial. Many of these point IOT solutions are offered on a cloud, and one vendor’s “cloud” is not another’s. Interoperability between clouds is not the concern of the IOT point solution provider, because addressing it would just “slow down the sales cycle.”

Consequently, it is left to the IT organization to try to integrate this point IOT solution – after the fact – into the organization’s more holistic IOT architecture (see Figure 3).

Figure 3: IOT Reference Architecture

 

IT must avoid islands of IOT architectures that inhibit the ability to share the IOT data and resulting analytics across other business and operational use cases. And if we thought data silos were a challenge, wait until we have to address “architecture silos”!

IOT Challenge #2: Scalability Challenge

Business stakeholders make isolated IOT product decisions because of the compelling ROI from the perspective of that particular business unit. However, the IOT solution vendor is motivated to sell the solution to other business units, and that’s when the scalability problems start – because many IOT solutions don’t scale.

“Scalability” refers to the ability to expand without running into obstacles that increase the per-unit costs of doing business – the ability to increase production inputs by a certain percentage and get an equal percentage increase in output[1].

However, most organizations want more than just “linear scalability”; these organizations want to leverage “economies of scale” to drive down incremental or marginal costs.

Economies of scale arise when there is an inverse relationship between the quantity produced and per-unit fixed costs; i.e. the greater the quantity of goods produced, the lower the per-unit fixed cost because costs are spread over a larger number of goods. Economies of scale reduce variable costs per unit via operational efficiencies and synergies.[2] (See Figure 4).

Figure 4: Economies of Scale

For example, if the first instance of something costs $100 to implement, then economies of scale mean that the second instance is cheaper to implement ($50), the third instance cheaper still ($25), and so on. Big organizations leverage economies of scale to accelerate their profitability, but if organizations cannot achieve economies of scale due to scalability problems, then the opposite is likely to happen – the overhead associated with each subsequent implementation increases in a debilitating, non-linear fashion. The sketch below contrasts the two cost curves.
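A minimal sketch, with purely hypothetical cost curves, of economies of scale versus the diseconomies that scalability problems create:

```python
# Economies of scale: per-instance cost halves with each deployment.
# Diseconomies (scalability problems): per-instance overhead grows
# non-linearly. Both growth rates are hypothetical illustrations.
def economies(n: int) -> float:
    return 100 / (2 ** (n - 1))      # $100, $50, $25, ...

def diseconomies(n: int) -> float:
    return 100 * (1.5 ** (n - 1))    # $100, $150, $225, ...

for n in range(1, 5):
    print(f"instance {n}: economies=${economies(n):.0f} "
          f"diseconomies=${diseconomies(n):.0f}")
```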

IOT Challenge #3: Operationalization Difficulties

Operationalization covers everything that happens after the initial implementation – the ongoing management of the IOT point solution. Significant work is required after the initial IOT solution implementation to ensure that the solution – and its supporting architecture, technology, data processing, and analytics – performs effortlessly for the business unit in a production environment (see Figure 5).

Figure 5: IOT Operationalization Considerations

Summary: Delivering on the Promise of IOT Monetization

The result of purchasing IOT point solutions – and the three IOT challenges they create – is an IOT business strategy that looks like a game of “Twister,” with one hand at the Optimization stage for one IOT capability and the other hand at the Monitoring stage for another.

If a chain is only as strong as its weakest link, then one’s IOT business strategy is only as strong as its weakest developed IOT capability.

Figure 6: Haphazard IOT Business Strategy Development

 

Ultimately, the goal of any comprehensive IOT Business Strategy should be to couple the new sources of IOT data with advanced analytics to power the organization’s most important business and operational use cases (see the blog “Monetizing the Internet of Things (IOT)” for more details about driving IOT monetization).

Dell EMC Consulting created the “IOT Technology Advisory Service” to help organizations manage “IOT’s 3 Challenges.” The Service positions our Dell EMC customers on the path to IOT Monetization by:

  1. Assessing the enterprise’s current IOT maturity
  2. Performing a gap analysis and identifying business and technology use cases
  3. Recommending an architecture and IOT roadmap, and
  4. Establishing next steps, an implementation plan, and a timeline

Figure 7: Three Challenges

 

In conclusion, we can help our clients avoid the life-or-death “Tug-of-War” struggle that compelling and highly desirable IOT point solutions can introduce.

Sources:

Figure 2: IOT Applications by Industry | “The 10 most popular Internet of Things applications right now”

[1] “What Is Linear Scalability?”

[2] “Economies of Scale”

 

The post Avoiding the IOT ‘Twister’ Business Strategy appeared first on InFocus Blog | Dell EMC Services.

]]>
https://infocus.dellemc.com/william_schmarzo/avoiding-the-iot-twister-business-strategy/feed/ 0