Title | Mind+Machine
---|---
Author | Marc Vollenweider
Genre | Foreign educational literature
ISBN | 9781119302971
We've added a collection of use cases throughout this book to help give you insight into the real-world applications of what you're learning. They all follow the same format to help you quickly find the information of greatest interest to you.
Context: A brief outline of where the use case comes from – industry, business function, and geography
Business Challenge: What the solution needed to achieve for the client(s)
Solution: An illustration of the solution or processes used to create that solution
Approach: Details on the steps involved in creating the solutions along with the mind+machine intensity diagram, illustrating the change in the balance between human effort and automation at key stages during the implementation of the solution
Analytics Challenges: The key issues to be solved along with an illustration of the relative complexity of the mind+machine aspects applied in solving the case
Benefits: The positive impact on productivity, time to market, and quality, and the new capabilities stemming from the solution
Implementation: The key achievements and the investment and/or effort required to make the solution a reality (development, implementation, and maintenance, as applicable), illustrated where possible
I wanted to include some of the more exciting projects currently under development to show the possibilities of analytics. In these cases, some of the productivity gain and investment metrics are estimates and are labeled (E).
The big data hype has its origin in three factors: the appearance of new data types or sources, such as social media; the increasing availability of connected devices, from mobile phones to machine sensors; and the evolution of ways to analyze large data sets in short periods of time. The sense of possibility led to a proliferation of use cases. We cannot say how many of these untested use cases will survive. Ultimately, the question is not what can be done, but what actually delivers value to the end user.
Gartner predicts that 60 percent of big data initiatives will fail in 2017,¹ and Wikibon, an open-source research firm, maintains that the average ROI for big data projects is currently only about 55 cents on the dollar spent instead of the expected $3 to $4.² The latter assessment wasn't made by CFOs, but came directly from practitioners, who saw a “lack of compelling need” for big data in those use cases as a reason for the low returns. However, our experience is that CFOs are increasingly asking about the viability of such analytics.
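As a quick sanity check on what those figures imply, the sketch below (Python, with the cited numbers hard-coded purely for illustration) contrasts the reported return per dollar with the expected range; it adds nothing beyond the arithmetic already stated.

```python
# Back-of-the-envelope arithmetic on the figures cited above (Wikibon's ~$0.55
# return per dollar spent versus an expected $3 to $4 per dollar).
# Numbers are taken from the text, not re-derived.
reported_return_per_dollar = 0.55
expected_return_per_dollar = (3.00, 4.00)

net_loss_per_dollar = 1.00 - reported_return_per_dollar
shortfall = tuple(e - reported_return_per_dollar for e in expected_return_per_dollar)

print(f"Net loss per dollar spent: ${net_loss_per_dollar:.2f}")
print(f"Shortfall versus expectations: ${shortfall[0]:.2f} to ${shortfall[1]:.2f} per dollar")
```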
For large companies, the investment in big data infrastructure and expertise can easily run into the tens of millions of dollars. It would seem obvious that prior to any such investment, the company would want to fully investigate the need, and yet in the 2012 BRITE/NYAMA “Marketing Measurement in Transition” study, 57 percent of companies self-reported that their marketing budgets were not based on ROI analysis.³
Measuring the ROI of analytics use cases is unfortunately not as easy as it sounds. This is especially true where companies have invested in infrastructure such as central data warehouses, software licenses, and data scientist teams. Properly calculating the desired impact at the use case level requires the corresponding governance and control, which is rare at this stage. In a series of initial interviews with companies that went on to become Evalueserve clients, seven areas were found to be lacking – in some cases, almost completely (a sketch of what a record covering these areas might look like follows the list):
1. Governance structure for the data and use case ownership
2. Accountability for individual use cases, portfolio management, and associated economics
3. Clear definition of analytics use cases
4. Objectives and intended end user benefits for each use case
5. Tracking the actual results against the targets
6. Knowledge management allowing the efficient reuse of prior work
7. Audit trails for the people, timing, actions, and results regarding the code, data, and findings
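Purely as an illustration of what closing these gaps could look like in practice, here is a minimal sketch of a per-use-case governance record. The field names and structure are invented for this example and do not come from Evalueserve or any particular tool.

```python
# Hypothetical sketch: a minimal registry record for one analytics use case,
# mapped to the seven areas listed above. All names are illustrative.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AuditEvent:
    timestamp: datetime   # when the action happened
    actor: str            # who touched the code, data, or findings
    action: str           # e.g. "refreshed data", "updated model"
    artifact: str         # the code, data set, or report affected

@dataclass
class UseCaseRecord:
    name: str                          # 3. clear definition of the use case
    owner: str                         # 1. governance: data and use case ownership
    accountable_manager: str           # 2. accountability and associated economics
    objective: str                     # 4. objective and intended end-user benefit
    target_benefit_usd: float          # 5. target the results are tracked against
    actual_benefit_usd: float = 0.0    # 5. actual results
    reused_assets: list[str] = field(default_factory=list)       # 6. knowledge management
    audit_trail: list[AuditEvent] = field(default_factory=list)  # 7. audit trail

    def benefit_gap(self) -> float:
        """Shortfall (or surplus) of actual versus targeted benefit."""
        return self.actual_benefit_usd - self.target_benefit_usd
```

Even a lightweight record like this makes it possible to track actual results against targets and to attribute every change to a person and a point in time.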
That said, examples of excellent and highly focused big data use case management do exist. The use case “Cross-Sell Analytics: Opportunity Dashboard” shows solid accountability. The campaign management function of the bank continually measures the ROI of campaigns end to end, and has built a focused factory for a portfolio of such analytics.
A much weaker big data use case was recently proposed to me by a US start-up engaged in human resources (HR) analytics; it illustrates some of the fundamental issues with the current hype. An ex-consultant and an ex-national security agent suggested using a derivative of software developed for the surveillance field for recruiting analytics. Based on the previous five to ten years of job applications – the curriculum vitae (CV) or resume and cover letter – and the performance data of the corresponding employees, a black-box algorithm would build a performance prediction model for new job applicants. The software would deliver hire/no-hire suggestions after receiving the data of new applications.
We rejected the proposal for two reasons: the obvious issue of data privacy and the expected ROI. Having done thousands of interviews, I have a very simple view of resumes. They deliver basic information that's been heavily fine-tuned by more or less competent coaching, and they essentially hide the candidate's true personality. I would argue that the predictive value of CVs has decreased over the past 20 years. Cultural bias in CV massaging is another issue. Human contact – preferably eye contact – is still the only way to cut through these walls of disguise.
The black-box algorithm would therefore have a very severe information shortage, making it not just inefficient, but actually in danger of producing a negative ROI in the form of many wrong decisions. When challenged on this, the start-up's salesperson stated that a “human filter” would have to be applied to find the false positives. Since a black-box algorithm is involved, there is no way of knowing how the software's conclusion was reached, so the analysis would need to be redone 100 percent, reducing the ROI still further.
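To make the ROI argument concrete, here is a hedged back-of-the-envelope sketch, with every figure invented for illustration: if the tool's suggestions cannot be trusted without a full human re-review, the screening cost is not reduced, while the tool's own cost and any wrong decisions that still slip through are added on top.

```python
# Illustrative-only model of the argument above: a black-box screening tool whose
# output must be fully re-reviewed by humans saves nothing on screening, adds its
# own cost, and any wrong hire/no-hire decisions still acted on add more.
# Every number below is made up for the sake of the example.

def screening_roi(n_applicants: int,
                  human_review_cost: float,    # cost of one human review
                  tool_cost_per_app: float,    # tool cost per application processed
                  residual_error_rate: float,  # share of wrong decisions still acted on
                  cost_per_wrong_decision: float) -> float:
    """ROI of adding the tool, relative to its cost (negative = value destroyed)."""
    baseline = n_applicants * human_review_cost                          # humans only
    with_tool = n_applicants * (tool_cost_per_app + human_review_cost)   # 100% re-review
    error_cost = n_applicants * residual_error_rate * cost_per_wrong_decision
    net_benefit = baseline - (with_tool + error_cost)
    investment = n_applicants * tool_cost_per_app
    return net_benefit / investment

# With made-up figures, the ROI is clearly negative:
print(screening_roi(1_000, human_review_cost=30.0, tool_cost_per_app=5.0,
                    residual_error_rate=0.02, cost_per_wrong_decision=20_000.0))
```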
It was also interesting to see that this use case was being sold as big data. It's a classic example of riding the wave of popularity of a term. Even under the most aggressive scenarios, our human resources performance data is not more than 300 to 400 megabytes, which hardly constitutes big data. Always be wary of excessive marketing language and the corresponding promises!
These are just two isolated use cases, which is certainly not enough to convince anyone trained in statistics, including myself. Therefore, it is necessary to look at how relevant big data analytics is in the overall demographics of analytics. To the best of my knowledge, this is not something that has ever been attempted in a study.
First, it is necessary to count the number of analytics use cases and put them into various buckets to create a demographic map of analytics (Figure I.2). One cautionary note: counting
1. “Gartner Predicts 2015: Big Data Challenges Move from Technology to the Organization,” Gartner Inc., 2014, https://www.gartner.com/doc/2928217/predicts-big-data-challenges.
2. Jeff Kelly, “Enterprises Struggling to Derive Maximum Value from Big Data,” Wikibon, 2013, http://wikibon.org/wiki/v/Enterprises_Struggling_to_Derive_Maximum_Value_from_Big_Data.
3. “BRITE–NYAMA Marketing Measurement in Transition Study,” Columbia Business School's annual BRITE conference, 2012, www8.gsb.columbia.edu/newsroom/newsn/1988/study-finds-marketers-struggle-with-the-big-data-and-digital-tools-of-today.