Data’s Backward-Looking Lens – Usefulness Versus Reliance

In 2011, ailing US retailer JCPenney recruited Ron Johnson, the former head of Apple’s retail operations who is credited with pioneering the Apple Store concept, as its new CEO.

Johnson arrived at JCPenney intent on reinventing the brand and boosting sales. He launched a broad refurbishment program, replaced coupons and clearance items with a steadier pricing system, and repositioned the stores as fashionable destination boutiques within malls. Johnson lasted less than two years at the company as JCPenney’s sales collapsed: same-store sales fell 25% – a reduction in sales of $4.3 billion – and the group ended the period with close to $1 billion in net annual losses.

Two of Johnson’s missteps at JCPenney were noteworthy:

  • Assumptions made – Many: Johnson assumed that what worked for Apple Stores would be a successful recipe for large department stores, even those with price-sensitive customers who typically perceive value through promotions (discounts and coupons).
  • Data used – None: Instead of piloting the ideas in a subset of stores, gathering data and insights, and prototyping and iterating, the new CEO assumed that a full overhaul of every department store would work (see the sketch after this list). Johnson didn’t validate his assumptions before executing the extensive and expensive store revamps. Likewise, he assumed the new pricing structure would deliver significant growth in sales and profitability without testing these ideas as part of the decision-making process.
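
As an illustration of the groundwork that was skipped, here is a minimal sketch – with entirely hypothetical numbers and store counts – of how a retailer might compare the sales impact in a handful of pilot stores against a control group before committing to a chain-wide rollout.

```python
# Hypothetical pilot-vs-control comparison: all numbers are invented for illustration.
from scipy import stats

# Year-over-year % change in same-store sales for stores that trialed the new
# pricing/format (pilot) versus stores that kept the old model (control).
pilot_uplift   = [-8.1, -5.4, -9.9, -7.2, -6.5, -10.3, -4.8, -7.9]
control_uplift = [-1.2,  0.4, -2.1, -0.8,  1.1,  -1.5,  0.2, -0.9]

# Welch's t-test: is the gap between the two groups larger than chance variation?
t_stat, p_value = stats.ttest_ind(pilot_uplift, control_uplift, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Pilot stores behave differently from control - rethink before rolling out.")
else:
    print("No clear signal yet - gather more data before deciding.")
```

Even a rough comparison like this would have surfaced whether the new pricing resonated with coupon-driven customers before the full chain was committed.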

The strategy was bold (or dangerous): a simplistic approach focused on a fixed, predetermined endpoint. The decision-making was equally binary: full reliance on untested assumptions, with no search for additional information, data, or insights to inform this linear process.

Apple’s culture and strategy typically do not involve testing prior to launches; JCPenney has an entirely different proposition and customer base. The new CEO was supported by activist investor Bill Ackman, and their ideas were innovative – they believed that reinventing JCPenney in this way would be compelling. Of course, making assumptions is a normal part of strategy. The shortcoming here was relying entirely on a risky and expensive strategic plan that assumed a single possible outcome. The plan of ending markdowns and turning stores into destinations could have benefited from groundwork and testing, with insights drawn from sample data to inform the decision.

Although data has limitations, these are no reason to ignore it outright. Data can be powerful when used to test assumptions, and analysis can transform raw data into insights that inform emergent decision-making. These insights and feedback loops offer clues to a multitude of possible futures, but data’s usefulness does not mean we should rely on it exclusively.

We crystallize below six key takeaways on data in our liminal and unpredictable world, which offers us a palette of shades between “reliance,” “limitations,” and “usefulness”:

Facts versus assumptions

A key benefit of data lies in its ability to provide empirical evidence to substantiate subjective opinions and assumptions. Testing tacit and explicit assumptions can provide validation, and while facts are better than assumptions, they still only provide knowledge of the present state (not the future). Validating assumptions about the past or present is a continuous loop: in an updating world, assumptions need to be reevaluated and continually tested.
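
To make that continuous loop concrete, here is a minimal sketch – hypothetical figures, standard beta-binomial updating – of re-scoring an assumption such as “at least 60% of customers redeem coupons” each time a new batch of evidence arrives.

```python
# Continually re-testing an assumption as new evidence arrives.
# Assumption under test: "at least 60% of customers redeem coupons."
# Figures are hypothetical; the updating rule is standard beta-binomial.
from scipy import stats

alpha, beta = 1.0, 1.0           # uninformative prior belief
weekly_batches = [               # (coupon redemptions, customers observed) per week
    (55, 100), (48, 100), (61, 100), (40, 100),
]

for week, (redeemed, observed) in enumerate(weekly_batches, start=1):
    alpha += redeemed                     # successes update the belief...
    beta  += observed - redeemed          # ...and so do failures
    posterior = stats.beta(alpha, beta)
    prob_assumption_holds = 1 - posterior.cdf(0.60)
    print(f"Week {week}: P(redemption rate >= 60%) = {prob_assumption_holds:.2f}")

# The assumption is never "settled": each batch of data re-scores it,
# and yesterday's validation only describes yesterday's world.
```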

Big data

Massive, networked datasets can provide ever-deeper insights through pattern recognition at scale. Machine learning enables us to discover non-intuitive, dynamic connections, while natural language processing is effective at extracting meaning from unstructured sources. We can gain new learnings and insights from the data we have today and use them to inform decision-making and actions. Business strategy is increasingly reliant on big data, which is also used to train and improve AI applications. The distinction between plain “data” and “big data” is often summarized as the 3 Vs: big data is characterized by volume (large size), velocity (growing fast), and variety (diverse sources, including social media, databases, and applications, both physical and digital).

Whether we call it data or big data, if we seek to apply it to the future, the considerations remain the same. Data does not predict anything beyond the modeled assumptions of a system with stabilized parameters. Predictive analytics can be invaluable in controllable, specific domains, where machine learning and pattern recognition can be applied to nearly infinite simulations. Complex environments, however, are dominated by unknown variables and tend to be unpredictable. In these environments, correlation can only be established retrospectively through data analysis, and causality can be difficult to infer.
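
A minimal sketch of that limitation, using invented numbers: a simple model fitted on data from a stable period keeps extrapolating the parameters it learned, so its error balloons the moment the underlying system shifts.

```python
# Hypothetical illustration: a model fitted on a stable regime breaks when the regime shifts.
import numpy as np

rng = np.random.default_rng(0)

# Stable period: demand grows roughly linearly with marketing spend.
spend_past  = np.linspace(1, 10, 50)
demand_past = 3.0 * spend_past + 20 + rng.normal(0, 1, 50)

# Fit a simple linear model on the historical (stable) data.
slope, intercept = np.polyfit(spend_past, demand_past, 1)

# Regime shift: a new competitor halves the response to spend and lowers the baseline.
spend_future  = np.linspace(1, 10, 50)
demand_future = 1.5 * spend_future + 5 + rng.normal(0, 1, 50)

in_sample_error  = np.mean(np.abs(slope * spend_past  + intercept - demand_past))
post_shift_error = np.mean(np.abs(slope * spend_future + intercept - demand_future))

print(f"Mean error, stable period:   {in_sample_error:.1f}")
print(f"Mean error, after the shift: {post_shift_error:.1f}")

# The model only ever "knew" the stabilized parameters it was fitted on;
# the correlation it captured was established retrospectively.
```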

“Relevance-driven” beats “data-driven”

Relevance is determined when assumptions and data confront the real world. To stay relevant, you cannot lose sight of understanding customer behavior. Clayton Christensen’s Jobs To Be Done (JTBD) framework reminds us that the reason people buy and use any product or service is to get a specific job done. It is no coincidence that Amazon’s first leadership principle is Customer Obsession (“Leaders start with the customer and work backwards… Although leaders pay attention to competitors, they obsess over customers”). Given the extent of competition, most competitive advantage is fleeting: Yahoo had the first-mover advantage, but Google became the dominant search engine. Google obsesses over customers to stay relevant, which drives its mantra of focusing on the user by creating new, surprising, and radically better products.

True innovation, just like the future, is not measurable at its inception

However valuable data can be in informing decision-making, the challenge lies in measuring the unmeasurable. Breakthrough innovation is a truly novel act of discontinuous creation, not merely an improvement to something that already exists. The new and the surprising do not lend themselves to ex ante data.

The value of data is to inform relevant decision-making, not to be prescriptive

Testing what can be tested and measuring what can be measured generates dynamic intelligence over time. Analysis can reveal insights for a specific range of quantifiable data sets. Machine learning offers real-time feedback loops, creating an evolutionary process in which the outputs are reused as future inputs. This supports smarter decisions with updated interpretations of the results from our daily experiments. These insights into the past and the emergent present can inform decision-making today and tomorrow, despite being anchored in the past.
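
A minimal sketch of such a feedback loop, using invented demand figures and simple exponential smoothing rather than any particular machine-learning system: each period’s forecast drives a decision, the observed outcome is fed back, and the forecast is adjusted for the next round.

```python
# Hypothetical feedback loop: forecast -> decision -> observed outcome -> updated forecast.
observed_demand = [100, 120, 90, 150, 160, 140]   # invented outcomes, revealed one period at a time
forecast = 110.0                                   # initial belief
learning_rate = 0.3                                # how quickly new evidence moves the estimate

for period, actual in enumerate(observed_demand, start=1):
    order_quantity = round(forecast)               # decision made on today's best estimate
    error = actual - forecast                      # experiential feedback
    forecast += learning_rate * error              # the output becomes the next input
    print(f"Period {period}: ordered {order_quantity}, actual {actual}, "
          f"next forecast {forecast:.1f}")

# The loop never predicts the future; it keeps updating its interpretation of the present.
```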

Counterintuitively, limiting reliance on data releases its superpowers

While insights derived from data can be powerful, understanding data’s limitations is what releases its true superpowers. When you internalize that, at any point in time, data on the future is nonexistent, you keep an open mind to the endless possibilities. You can use feedback loops in decision-making to help anticipate shifts and change without becoming a prisoner to what sample data seems to be signaling. Effective decision-making can prevail despite speed and uncertainty when there is room for experimentation as an emergent process in relation to open and unwritten futures. The value of data is to inform evolutionary decision-making, not to imprison it. This dynamic process continues by actioning decisions while benefiting from experiential feedback and adjusting future decisions based on earlier results. This multilayered approach acknowledges the various possible futures ahead. Quantifiable and unquantifiable, objective and subjective, measurable and unmeasurable drivers of change all contribute to imagining the colorful kaleidoscope of possible eventualities and help build the capacity to be future-savvy.

In our UN-VICE world (UNknown, Volatile, Intersecting, Complex, Exponential), expect discontinuity, instability, shocks, and randomness. These dynamic, unpredictable systems are difficult to model because conditions constantly change and new factors emerge. Irreducible complexity is not conducive to being prescriptive, but we can still shape the future without a dataset on the future – we simply need imagination.

The insights derived from data can be invaluable as a feedback loop for decision-making, but they should never be confused with a proxy for the future, a predictor of the future, or the future itself.

Note:

Roger Spitz is the lead author of The Definitive Guide to Thriving on Disruption (Disruptive Futures Institute, 2022), from which this article is adapted.
