Note: this is Part 4 in a series of blog posts about becoming data driven in business. You may want to read the prior three parts (one, two, three) before reading this essay.
In our previous essay we talked about the peculiar idea — taken from the field of Statistical Process Control (SPC) — that “there is no truth in business, only ‘knowledge’”. ‘Knowledge’ is defined as “theories or models that lead to better predictions”. I argued that when you combine this unusual idea with the related notion that ‘management is prediction’, you get the type of thinking that underpins operational excellence. To put it simply: “if you want to get good at running businesses, you need to pursue knowledge — that is, theories or models that allow you to better predict the business outcomes of your actions. Such a pursuit more often than not leads you towards the use of data.”
But I also said that this reads like a stupidly simple truism. As a result, we mostly talked about the practical implications of the idea. I walked you through two instantiations of this worldview at Amazon, along with a few quirks you’d face if you planned to put the idea to practice.
While practical implications are all well and good, I think our focus on the practical belies the true impact of the idea. Deming’s assertion was a huge revelation to me — the very idea that it is knowledge that you want to pursue in business, not truth. In this essay, I want to take a step back to discuss the philosophical implications of taking this idea seriously.
A Philosophy for Business Judgment
“Wait,” I can already hear you say, “Philosophy is stupid! Philosophy in the context of an extremely applied domain like business is even stupider. Why talk about the philosophical implications of a business idea?”
My answer to that question is that this is useful because it is the sort of philosophy that has to do with truth. The proper name for this branch of philosophy is ‘epistemology’, which is really a fancy-ass way of asking “how do you know what you believe is true?”
Let’s think about that for a moment: how do you know that the things you believe about your business are true? Like most people, you probably don’t give it much thought. You know certain things like “maintain good relationships with your suppliers” and “Facebook ads work for my business, but have actually kinda sucked in the past two years” — stable beliefs that change rather slowly.
And yet one of the most difficult things in business is knowing when to discard old beliefs you have, especially when there is a fundamental change to your world. History is littered with stories of businesspeople who believed some set of things about their businesses, and who were then caught off guard when the nature of their industries changed from underneath them. These changes come in many forms; perhaps you’ve heard of a few of them.
The most famous set of stories come from business professor Clayton Christensen, who introduced a theory of disruption with his 1997 book The Innovator’s Dilemma. Christensen’s theory of disruption is, incidentally, the textbook example of ‘businesspeople think they know X about their industries, and then they keep doing X while Y occurs, all the way until their companies die sorry deaths’.
The story goes something like this: in the 90s, Christensen started asking himself why business success was so difficult to sustain. He noticed that in the steel market, the integrated steel companies — large, high margin steel producers — had over a period of three decades all gone bankrupt. These companies were killed by the so-called ‘mini mills’. These mini mills were lower cost, lower quality steel producers that started out in the mid-60s processing scrap and spitting out rebar — the lousiest, 100% commodified, lowest margin steel product. The integrated steel producers saw their market share erode in the rebar sector, and ceded ground willingly: they were happy to move upwards to more lucrative products. Over the next few decades, however, the mini mills got better at producing steel of increasingly higher quality, and pushed the integrated steel producers upwards with each innovation. This was fine and good up to the point where the incumbents were pushed to the tippy top of their markets and then went out of business.
Christensen noticed that the same pattern recurred elsewhere. In the disk drive market, for instance, manufacturers of smaller, lower quality disk drives gradually improved, until they killed the larger format disk drive manufacturers; in the excavator market, the same thing had occurred, but with hydraulic excavators displacing large steam shovels over a period of decades.
What made Christensen’s theory so arresting was that he did not blame the incumbent executives in all of his examples. No, Christensen argued, executives were doing the rational thing in each case — they were pursuing products with higher margins, ceding ground on less attractive opportunities at each step of the way, up to the point where the lousier, disruptor technology became good enough for their highest value customers, at which point the incumbent businesses failed. The main culprit, Christensen reasoned, was the way managers had been taught to measure success. From a New Yorker profile:
After puzzling over this mystery for a long time, (Christensen) finally came up with the answer: it was owing to the way the managers had learned to measure success. Success was now measured not in numbers of dollars but in ratios. Whether it was return on net assets, or gross-margin percentage, or internal rate of return, all these measures had, in the past forty years, been enshrined into a near-religion (he liked to call it the Church of New Finance) by partners in hedge funds and venture-capital firms and finance professors in business schools. People had come to think that the most important thing was not how much profit you made in absolute terms but what percentage of profit you made on each dollar you put in. And that belief drove managers to shed high-volume but low-margin products from their balance sheets, even though nobody had ever come across a bank that accepted deposits in ratios. This was why he called it a church: it was an encompassing orthodoxy that made it impossible for believers to see that it might be wrong (emphasis added).
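The ratio trap in that passage is easy to make concrete. Here is a toy calculation; every figure is invented for illustration, and is not drawn from the actual steel cases above:

```python
# Toy numbers, invented for illustration: shedding the high-volume,
# low-margin product ("rebar") improves the margin ratio even as it
# shrinks absolute profit -- the trap Christensen describes.

products = {
    # name: (revenue, profit) in dollars -- hypothetical figures
    "rebar": (100_000_000, 4_000_000),       # 4% margin, high volume
    "sheet_steel": (40_000_000, 8_000_000),  # 20% margin, lower volume
}

def totals(portfolio):
    """Return (absolute profit, gross-margin ratio) for a product mix."""
    revenue = sum(r for r, _ in portfolio.values())
    profit = sum(p for _, p in portfolio.values())
    return profit, profit / revenue

before = totals(products)                                            # keep rebar
after = totals({k: v for k, v in products.items() if k != "rebar"})  # shed it

# The ratio rises (~8.6% -> 20%) even though profit falls
# ($12M -> $8M): the "Church of New Finance" scoreboard improves
# while the bank account shrinks.
```

Nobody, as Christensen says, has ever come across a bank that accepts deposits in ratios.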
Christensen’s story is sobering because it is difficult to imagine overcoming your orthodoxy if you are an executive in one of those incumbent companies. But the New Yorker profile also includes an appearance by legendary Intel CEO Andy Grove, the protagonist of our next story. Grove is perhaps proof positive that it is possible to update one’s beliefs, especially if one is an exceedingly paranoid businessperson. The article continues:
Grove had sensed that something was moving around at the bottom of his industry, and he knew that this something was threatening to him, but he didn’t have the language to explain it precisely to himself, or to communicate to his people why they should worry about it. He asked Christensen to come out to Intel, and Christensen told him about the integrated mills and the mini mills, and right away Grove knew this was the story he’d been looking for. He had Christensen tell the same story to his staff, and “rebar” became a company mantra. Intel brought out the Celeron chip, a cheap product that was ideal for the new low-end PCs, and within a year it had captured thirty-five per cent of the market. Soon afterward, Andy Grove stood up at the comdex trade show, in Las Vegas, holding a copy of “The Innovator’s Dilemma,” and told the audience that it was the most important book he’d read in ten years. The most important book Andy Grove had read in ten years! A man from Forbes was in the audience that day, and in 1999 Grove and Christensen appeared together on the cover of Forbes, and things were never the same for Clayton Christensen again.
In Only the Paranoid Survive, that same Intel CEO, Andy Grove, talks about a period of crisis in the company’s history. This period wasn’t about low-end disruption, but it might as well have been, so similar were the root causes. In 1994 Intel was caught off guard by a firestorm of consumer anger, in response to a flaw in Intel’s Pentium processor. In the past, Intel had dealt with such flaws by fixing the chip design and issuing replacements commensurate to the scale of the problem ... but primarily to computer manufacturers. This was rational: it regarded itself as a component supplier to such manufacturers, far removed from the end user. Imagine Intel’s surprise, then, when the mainstream press picked up the story, and thousands of furious computer users contacted the company directly, asking for refunds. It took a year for Grove to realise that Intel had become a consumer-facing company — in retrospect, a totally predictable outcome thanks to their remarkably successful ‘Intel Inside’ ad campaign. But the realisation came late. At the time of the crisis, Grove and other Intel execs could not update their mental model of their own company and were paralysed by the consumer backlash; to end the crisis, Intel had to recall the flawed Pentium chips, along with junking all of its existing Pentium inventory. They lost half a billion dollars — four years of Pentium’s ad budget — in a mere six weeks.
Lest you think that business misjudgments come from avoidable mistakes, I should note that some business misjudgments come from totally incomprehensible developments. In Good Synthesis is the Start of Good Sensemaking I told the story of incumbent laminate manufacturer Formica, which was challenged by Ralph Wilson Plastics (RWP) in the 70s. At the time, nobody in American business understood the nature of ‘Process Power’ — that is, the competitive advantage where org design, company culture and specialised production processes come together over an extended period to produce lower costs and a superior product. Widespread understanding of Process Power only emerged in the 80s. (We now know the phenomenon by terms like ‘lean manufacturing’ and ‘continuous improvement’.) I told this story from the perspective of Formica’s CEO because I wanted to capture the sheer confusion they must’ve felt as RWP took market share away from the company; I chose to present this specific case because RWP was wielding a competitive advantage that Formica simply did not — and could not — understand.
No Fixed Truths
It’s really tempting to read these stories and go “well, if I were in their shoes, I wouldn’t have made the mistakes they made.” But I don’t think this is the case. Many of these business incidents involved executives no smarter or dumber than you or me. In the case of Formica, the competitive advantage they were up against had no name. And in the case of Intel, then-CEO Andy Grove was widely regarded as one of the best executives to have ever worked in Silicon Valley. If these events occurred to such people, new versions are likely to occur to us, today.
And so I’ve long wondered about the sort of thinking you’d need to do to escape this sort of trap. How do you prevent yourself from making this mistake? One trite response is to quote Mark Twain: “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so” ... as if watching out for ‘things you know that just ain’t so’ is that simple to do. Another, commonly cited answer is to say that you’ll need to have ‘strong opinions, weakly held’ — which, as I’ve noted elsewhere, mostly doesn’t work that well.
Deming’s formulation of ‘knowledge’ over truth gives us a third path. Here’s Ed Baker, former Ford executive and close Deming associate, in The Symphony of Profound Knowledge:
Deming’s criterion of knowledge is whether it helps us to predict and not whether we discover truth, because there is no such thing in the domain of empirical knowledge (emphasis mine). In the empirical world, statements are only probable rather than true and absolute. If we can predict, then we have knowledge. We could have a beautifully constructed theory that has little or no relevance to the real problems that people face. Euclidean geometry, Plato’s forms, the normal curve, and other examples of abstract reasoning are true in their own world of mind, regardless of whether they apply to the empirical world. A theory that is internally consistent (i.e., true in its own world) has construct validity but may not have predictive validity (emphasis mine). We learn about the ability of a theory to help us in predicting by structuring our predictions to be testable by empirical investigation. A theory is evaluated by future experience, whether in science, in management, or in everyday living. Theories can be revised as learning occurs, and as evidence accrues, we increase or decrease our degree of belief in their ability to help us predict (emphasis mine). Source.
Deming’s framing is interesting to me because it is in opposition to the default worldview that most of us possess. I think many of us treat the domain of business as one of immutable truths. And why wouldn’t we? The idea that the world consists of unchanging principles is a frame that is implicitly communicated to us in primary school, all the way through to most specialisations in university. We think that it is possible to learn several foundational ‘truths’, from which we may derive everything interesting about reality. This is the basic frame that underpins chemistry and physics, biology and math. Heck, if you’re anything like me, you probably also believe this about your personal relationships. People like us think that we can get better at dealing with others by learning several foundational, unchanging principles that, once learnt, would serve us well for the rest of our days.
And yet we know that business is a domain where the principles aren’t fixed. We’ve just seen a handful of examples where changing consumer preferences, or shifting industry dynamics, or emerging technology may all render what we previously thought to be true, well, false. Deming’s argument is useful because it does away with the notion of ‘truth’ in business. All that matters is whether your beliefs are predictive.
(With the Christensen example, this is slightly trickier: could the managers of these disrupted companies have predicted the downfall of their businesses, so fixated were they on business ratios and not absolute numbers? I think it might be possible — these businesses would not have been fun to run on their way down. Surely these people could see that their beliefs were not predictive of business outcomes? But the reality is that the managers lacked the language to reason about their predicament, and so I cannot be entirely certain.)
But then, of course, Deming goes slightly further. He argues that it does not matter how internally consistent or wonderfully constructed your beliefs are. Or, to say this differently: it does not matter how compelling the orthodoxy is. Construct validity is insufficient. Predictive validity is what matters. Consequently, business beliefs that are not predictive are discardable.
I find this rather freeing.
Think about some of the actionable implications: if you believe certain things to be true about your business, only to find out that downstream outcomes don’t map to your expectations, Deming’s frame about ‘knowledge’ vs ‘truth’ implies that you should be quicker to re-examine your beliefs. Your beliefs are only valuable in the context of your ability to predict business outcomes. Given Deming’s definition of ‘knowledge’ ... consider: perhaps your knowledge has quite recently ceased to be knowledge?
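Baker’s last point — that we “increase or decrease our degree of belief” in a theory as its predictions succeed or fail — has a natural formal sketch in Bayes’ rule. To be clear, the prior and the hit rates below are invented for illustration; this is one way to model the idea, not a procedure Deming prescribed:

```python
def update_belief(prior, outcomes, p_hit_if_valid=0.8, p_hit_if_invalid=0.4):
    """Revise P(theory is predictively valid) after each prediction.

    outcomes: iterable of booleans, True when the theory's prediction held.
    The two likelihoods are illustrative assumptions, not measured values.
    """
    belief = prior
    for hit in outcomes:
        likelihood_valid = p_hit_if_valid if hit else 1 - p_hit_if_valid
        likelihood_invalid = p_hit_if_invalid if hit else 1 - p_hit_if_invalid
        evidence = belief * likelihood_valid + (1 - belief) * likelihood_invalid
        belief = belief * likelihood_valid / evidence  # Bayes' rule
    return belief

# A theory that keeps predicting correctly earns more credence...
confident = update_belief(0.5, [True, True, True, True])    # rises toward 1
# ...while one that keeps missing should be re-examined, not defended.
shaky = update_belief(0.5, [False, False, False, False])    # falls toward 0
```

The mechanics matter less than the habit they encode: your degree of belief is an output of your track record of predictions, not a fixed input.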
This frame of ‘knowledge vs truth’ also sheds light on Amazon founder Jeff Bezos’s exhortation to focus on ‘things that don’t change’. If knowledge in business consists of beliefs that are most predictive of business outcomes, then the most valuable set of predictive beliefs are surely those that change the slowest.
Here’s Bezos directly:
I very frequently get the question: “What's going to change in the next 10 years?” And that is a very interesting question; it's a very common one. I almost never get the question: “What's not going to change in the next 10 years?” And I submit to you that that second question is actually the more important of the two — because you can build a business strategy around the things that are stable in time. ... In our retail business, we know that customers want low prices, and I know that's going to be true 10 years from now. They want fast delivery; they want vast selection.
It's impossible to imagine a future 10 years from now where a customer comes up and says, “Jeff, I love Amazon; I just wish the prices were a little higher.” “I love Amazon; I just wish you'd deliver a little more slowly.” Impossible.
And so the effort we put into those things, spinning those things up, we know the energy we put into it today will still be paying off dividends for our customers 10 years from now. When you have something that you know is true, even over the long term, you can afford to put a lot of energy into it.
Deming’s frame also applies to the evaluation of new, practical ideas. Shortly after I started digging into Statistical Process Control a reader pinged me to say something along the lines of “I did a masters in Operations Research and studied SPC in university, and I assure you that it’s not as useful outside of manufacturing as you might think it is.” I responded with the observation that Wheeler, amongst other SPC practitioners, believes that it is applicable. I said that Understanding Variation is filled with examples outside of manufacturing. I said also that Amazon’s early executives seem to have gotten a huge amount of mileage out of applying the process control mindset to their tech business. The reader responded with something like “of course they (the SPC practitioners) would claim that, but it doesn’t work.”
In retrospect, this is an argument with strong construct validity (“I studied SPC very deeply; I know that its ideas are not applicable to fields outside of manufacturing”). But if we use Deming’s lens, an existence proof in an applied domain should have given this reader some pause. If multiple Amazon execs say that the process control worldview has worked for them, and Amazon has clearly used process control tools to achieve its results, then this is a violation of their expectations. In other words, the opposing claim has predictive validity. And so even if you believe your argument is wonderful and true, perhaps you should set it aside to investigate. Perhaps these practitioners have found a way to apply these ideas in irregular, high innovation technological contexts. Perhaps your argument does not accurately map to certain niches of reality.
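For readers who haven’t met the SPC tool at the centre of this argument: the core of Wheeler’s XmR (‘process behaviour’) chart is a short calculation, small enough to sketch here. The weekly signup numbers are invented; the 2.66 constant is Wheeler’s standard scaling factor for the average moving range:

```python
def xmr_limits(values):
    """Natural process limits for an XmR (process behaviour) chart.

    Limits are the mean plus/minus 2.66 times the average moving range,
    per Wheeler's standard XmR construction.
    """
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    return mean - 2.66 * mr_bar, mean, mean + 2.66 * mr_bar

# Hypothetical weekly signup counts for some business metric.
weekly_signups = [52, 48, 55, 50, 47, 53, 49, 80]
lower, centre, upper = xmr_limits(weekly_signups)

# Points inside the limits are routine variation and demand no story;
# points outside are signals worth investigating.
outside = [x for x in weekly_signups if not (lower <= x <= upper)]
```

Notice there’s nothing manufacturing-specific in the maths — which is part of why Wheeler and the Amazon execs could carry the tool into other domains.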
I realise that I’m dangerously close to confirming my own beliefs with this piece. I’ve long argued that “scientists are interested in what is true; practitioners are interested in what is useful”; my approach to theory and synthesis is biased towards what is practically shown, not academically proven. But I do think that Deming’s ‘knowledge vs truth’ is profound and useful and good. Because in applying it to business, perhaps you’ll find yourself holding onto your beliefs more loosely. And perhaps in internalising it, you’re more likely to learn something new.
This is Part 4 of the Becoming Data Driven in Business series. The next part is Part 5, here: Process Behaviour Charts: More Than You Need To Know. But there's also a part 4.5, which is only nominally part of the series: The Deming Paradox: Operationally Rigorous Companies Aren’t Very Nice Places to Work.