Driving Competitive Advantage through Data Monetization

By Matt Maccaux, Global Big Data Practice Lead – April 24, 2018

Data monetization is a much-discussed topic in the business world these days – the holy grail of enterprise analytics initiatives and an enabler for digital transformation and market differentiation. The value is undeniable, yet the journey to get there can be tricky and complex to navigate.

So, how do organizations successfully monetize their data to drive competitive advantage? First and foremost, understand that it is not all about the technology, nor should it be a technology-led discussion.

In this blog, I’ll focus on 3 essential and interlocking elements:

  • People – the right skilled resources working on cross-functional, integrated teams
  • Process – collaborative and agile ‘DataOps’ processes
  • Technology – modern infrastructure and tools

Building a High Performing, Cross-Functional Team

The monetization of data relies on the people performing the analytics and building the systems that operationalize them. There are several different groups that work to make that happen:

  • Data Scientists – these are the people who get all the press. They experiment by starting with open-ended questions such as: how can the organization increase share of wallet? Or, in the case of a public school system, how can we help underserved students learn and perform better? These folks want it all – all the data, all the resources, the ability to bring their own data and tools, and more. And you need to keep them happy because they are in high demand.
  • Data Analysts – these are the people who do more traditional analytics using traditional tools such as SAS, Tableau, etc. These folks make up the biggest part of the analytical community and provide the operational reporting that keeps the organization on track. As the tools get more sophisticated, however (e.g., SAS Viya), these users are going to start looking a lot more like data scientists.
  • Engineers and Operations – these folks take the outputs from the data scientists and analysts and put the systems, infrastructure, and applications in place to operationalize the models and reports. These users are software developers and system operators who don’t need access to sensitive data, but do need to work in production-like environments to ensure the models and reports function as expected. For example, they would orchestrate taking a completed model with streaming data and feeding the output via APIs to be consumed by applications (a minimal sketch of this step follows this list).
  • Data Stewards – these are the people in the business who understand the context for the data – where it came from, how it changes, the security and access requirements/restrictions, and most importantly – what the data means.
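
To make that last point concrete, here is a minimal, hypothetical sketch of wrapping a completed model behind an HTTP API so applications can consume its output. The model artifact, feature names, and endpoint are illustrative assumptions, not a reference to any specific product or project.

```python
# Hypothetical model-serving sketch: an engineer exposes a finished model via an API.
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the completed model handed over by the data science team (illustrative file name).
with open("churn_model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/score", methods=["POST"])
def score():
    """Accept one record's features as JSON and return the model's prediction."""
    features = request.get_json()
    prediction = model.predict([[features["tenure"], features["monthly_spend"]]])
    return jsonify({"score": float(prediction[0])})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

Downstream applications then call the /score endpoint with each incoming record, which is what connects the data scientist’s work to something the business can actually use.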

Successful organizations have these groups working together on projects from start to finish so that everyone is working towards the same end goal in an integrated, streamlined fashion. Most data science projects fail because organizations follow the traditional process of ‘check the code in and move on,’ and that process breaks down when the engineer or operator can’t apply the model to an application. For example, if a data scientist builds a model against static or batch data but the applications that consume it process streaming data, the model can behave differently in production and cause costly delays.
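
A hedged sketch of that mismatch, with all names assumed for illustration: the data scientist validates the model against a whole table of historical records, while production calls the same model one event at a time from a stream, so features are assembled differently and behavior and latency can diverge.

```python
# Illustrative batch-vs-streaming sketch (column, topic, and server names are assumptions).
import json

import pandas as pd
from kafka import KafkaConsumer  # assumes the kafka-python package

def score_batch(model, df: pd.DataFrame) -> pd.Series:
    """How the model was validated: the whole historical table at once."""
    return pd.Series(model.predict(df[["tenure", "monthly_spend"]]), index=df.index)

def score_stream(model, topic: str = "customer-events") -> None:
    """How production calls it: one event at a time, as data arrives."""
    consumer = KafkaConsumer(topic, bootstrap_servers="localhost:9092")
    for message in consumer:
        event = json.loads(message.value)
        # Features must be rebuilt per event, in exactly the shape the model expects.
        score = model.predict([[event["tenure"], event["monthly_spend"]]])[0]
        print(f"customer={event.get('customer_id')} score={score}")
```

Catching that difference early is exactly what the cross-functional, start-to-finish team is for.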

Optimizing Processes for Data Monetization at Speed and Scale

How much time does your organization spend deploying infrastructure? How about tearing it down? Are you cloud-like in the way you do things?

Since we are talking about data monetization, digital transformation is at the heart of the solution, and it isn’t just about the data. The process to provision data, build models, publish them, build applications, and provide new experiences for your users/customers requires a new approach to application development, and at the heart of it is DevOps – or in this case, DataOps. How repeatable is your process? Organizations should think about this in the same terms as they do modern application development.
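
One way to picture a DataOps approach, under purely illustrative names: the whole chain lives as versioned code that can be reviewed, re-run, and automated, rather than as a series of manual hand-offs.

```python
# Hypothetical DataOps pipeline sketch: each step is ordinary, versionable code.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Step:
    name: str
    run: Callable[[], None]

def provision_data() -> None:
    """Grant the project governed, read-only access to the source data."""
    print("provisioning data access...")

def train_model() -> None:
    """Train and validate the model in an isolated sandbox environment."""
    print("training model...")

def publish_model() -> None:
    """Register the approved model so applications can consume it via an API."""
    print("publishing model...")

# The same definition runs in development, test, and production.
PIPELINE: List[Step] = [
    Step("provision-data", provision_data),
    Step("train-model", train_model),
    Step("publish-model", publish_model),
]

def run_pipeline() -> None:
    """Run every step in order, the same way, every time."""
    for step in PIPELINE:
        print(f"== {step.name} ==")
        step.run()

if __name__ == "__main__":
    run_pipeline()
```

The specifics matter less than the repeatability: if the pipeline only works because one person remembers the manual steps, it will not scale.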

You also need to make sure you’re targeting the right high-impact use cases. As organizations gain maturity in data science and analytics, there will be no shortage of use cases and ideas. It’s imperative that organizations follow a process upfront to identify and prioritize the business use cases that will deliver the most benefit with the lowest barrier to implementation. The worst-case scenario is that you spend the effort to operationalize the analytics and it doesn’t have a measurable effect on the business.

Another critical area for process excellence is enabling your data scientists to be productive day in and day out.  Success hangs on their ability to find and apply patterns in your data – so get them what they need fast and get out of their way!  This means provisioning analytics environments in minutes, not months, and having a safe space for them to test out ideas without blowing anything up.  To achieve this from a process standpoint, organizations need to automate the end-to-end process and proactively identify and eliminate any gaps or delays in the discovery and monetization lifecycle.

And Then There’s the Technology…

Rather than dive into specific technologies, let’s talk about some of the guiding principles that the technology should be architected for:

  1. Data that is made available easily and freely, but in a controlled and secured manner – that means not creating physical copies of the data for each and every user
  2. Isolated “sandbox” environments so users can freely experiment with the data using tools of their choice, without the risk of corrupting the “master” or production data (illustrated in the sketch after this list)
  3. Environment elasticity that scales up and down based on the user workload and analytical requirements
  4. Compliance with all governance, security, and regulatory rules and processes
  5. Cloud-ready, be it private, public or hybrid cloud deployments
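
As one hedged way to picture principles 1 through 3, the sketch below provisions an isolated sandbox container for a single user, mounting a shared data set read-only so no physical copies are made and the source data cannot be corrupted. It assumes the Docker SDK for Python and illustrative image, path, and resource values; it is a thought experiment, not a prescription for any particular platform.

```python
# Illustrative sandbox-provisioning sketch (assumes the `docker` Python SDK is installed).
import docker

def provision_sandbox(user: str, image: str = "jupyter/scipy-notebook") -> str:
    """Start an isolated, disposable analytics environment for one user."""
    client = docker.from_env()
    container = client.containers.run(
        image,
        name=f"sandbox-{user}",
        detach=True,
        # Principle 1: one shared copy of the data, exposed read-only to every sandbox.
        volumes={"/data/shared": {"bind": "/home/jovyan/data", "mode": "ro"}},
        # Principle 3: cap resources per workload so environments scale up and down cleanly.
        mem_limit="8g",
        nano_cpus=2_000_000_000,  # roughly two CPUs
    )
    return container.id

def teardown_sandbox(user: str) -> None:
    """Tear the sandbox down when the experiment ends; nothing lingers by accident."""
    client = docker.from_env()
    client.containers.get(f"sandbox-{user}").remove(force=True)

if __name__ == "__main__":
    print("sandbox ready:", provision_sandbox("alice"))
```

Provisioning and teardown in seconds, rather than tickets and weeks, is what lets data scientists experiment freely without putting governed data at risk.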

IT teams simply cannot keep up with the demand from analytical users following the old way of provisioning static environments on bare metal servers with a limited set of tools. It is equally challenging for IT to keep up with the sheer volume of new and updated tools and libraries, especially in the machine learning, deep learning, and AI space. We recommend that IT focus on providing the core infrastructure as-a-Service and giving users the freedom to bring their own tools so that they can innovate at the pace the business requires.

At Dell EMC, we offer a comprehensive portfolio of Big Data & IoT Consulting services, from strategy through implementation and beyond, and we help bridge the people, process, and technology organizations need to accelerate time to value for their data monetization initiatives.

About Matt Maccaux


Global Big Data Practice Lead

Matt has been with Dell Technologies since 2012, working for VMware, Pivotal, EMC, and now Dell EMC. He is the Global Big Data Practice Lead, working on Big Data strategy and supporting go-to-market activities within the Consulting Practice.

You’ll find Matt working with customers on their Big Data journey, speaking at tradeshows, and working with partners in the ecosystem to provide the best solutions for our customers.
