Forecasting Under Uncertainty


    I remember reading Superforecasting: The Art and Science of Prediction on a hospital bed. An insect bite had become infected, my leg swelled up into a red balloon, and I found myself in a hospital ward shortly thereafter. To while away the time, I read Superforecasting. This was a couple of months after the book was first published in 2015; I remember skimming through several interviews with Tetlock as he went on tour to promote it. There was significant buzz on Twitter.

    I finished the book a day before I was discharged.

    I bring this up because my encounter with the insect and my subsequent hospitalisation were completely unpredictable events. I was visiting a friend in Singapore, and took a shortcut across a field. Somewhere along that shortcut, an insect must’ve bitten the back of my thigh. I remember scratching my leg on the bus ride home. If I hadn’t visited the friend, or taken the shortcut, or scratched the bite, things might’ve turned out differently.

    In the academic literature on judgment and decision making, my little accident belongs to a class of unpredictability that we call ‘uncertainty’.

    I would give you a blank stare if you were to ask me the probability of contracting cellulitis from an insect bite on a day out in Singapore. I would tell you that it was incredibly unlikely — a ‘tail risk’, at best. I would hesitate to give you an exact number. And I would argue that it is better if you don’t think about such things; tomorrow, for instance, you could be driving home and be in the exact spot at the exact time where a large metal pole shears off from the top of a skyscraper, falls 21 floors, and crushes you instantly. This happened to a man named Lim Chin Aik in Penang in 2013 — a fact that I think about whenever I walk in the shadow of a tall building.

    Uncertainty is different from risk. Risk is quantifiable. Uncertainty is not. The price of grain you buy from your supplier fluctuates within a band of a few dollars ... this is risk. You may hedge this risk by buying grain futures. But the price of grain ten years into the future is uncertainty. It is impossible to quantify*.

    (* Tetlock’s research shows, to a good approximation, that a forecast for any event more than five years out is no better than chance. One implication of this is that if you discover a forecaster opining confidently about some event more than five years into the future — treat that forecaster like you would an idiot.)
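
    To make the ‘quantifiable’ part concrete, here is a minimal sketch in Python. Every number in it is invented, and the whole thing rests on the assumption that makes risk risk: that the supplier’s price history is a reasonable guide to next week’s price.

    ```python
    import statistics

    # Hypothetical weekly grain prices ($/bushel) from a supplier whose
    # behaviour has been stable. That stability is what makes this "risk".
    prices = [6.10, 6.25, 5.95, 6.40, 6.05, 6.30, 6.15, 5.90, 6.20, 6.35]

    mean = statistics.mean(prices)
    spread = statistics.stdev(prices)
    print(f"average price ${mean:.2f}, typical fluctuation +/- ${spread:.2f}")

    # A crude, quantified answer to "what are the odds next week's price
    # blows past my budget?" -- valid only so long as the future keeps
    # resembling the past that produced these numbers.
    budget_cap = 6.50
    p_over = sum(1 for p in prices if p > budget_cap) / len(prices)
    print(f"historical frequency of exceeding ${budget_cap:.2f}: {p_over:.0%}")
    ```

    No equivalent series exists for the price of grain a decade out, which is why the same arithmetic tells you nothing there.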

    Why is the price of grain ten years into the future so difficult to quantify? The reason is that history is path-dependent. Risk exists when the future is like the past. Uncertainty happens when the future is different. Low-probability ‘tail’ events may occur that change the very foundations of our world. Ten years from now, perhaps grain becomes only a couple of dollars more expensive, keeping pace with inflation. But perhaps a virus wipes out all the grain in the world, and humanity shifts to potatoes as a staple crop. This is incredibly unlikely — about as unlikely as the notion of a world war in the decade before World War 1. But if it happens, then it happens. There is good reason, after all, that the saying goes ‘history is just one damn thing after another’: it really is just one discontinuous thing after another.

    None of this is new. Nassim Nicholas Taleb has written many thousands of words in his various books, arguing that tail risks affect probabilities in ways that make probability distributions unreliable. Way back in 1937, the great economist John Maynard Keynes himself wrote about the differences between risk and uncertainty:

    By “uncertain” knowledge, let me explain, I do not mean merely to distinguish what is known for certain from what is only probable. The game of roulette is not subject, in this sense, to uncertainty; nor is the prospect of a Victory bond being drawn. Or, again, the expectation of life is only slightly uncertain. Even the weather is only moderately uncertain. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper and the rate of interest twenty years hence, or the obsolescence of a new invention, or the position of private wealth-owners in the social system in 1970. About these matters there is no scientific basis on which to form any calculable probability whatever. We simply do not know.

    Into this landscape stepped Tetlock and Gardner’s book. It’s taken me a few years to appreciate why Superforecasting was so remarkable at the time of its release. It is remarkable because it argues that you can come up with calculable probabilities for all the sorts of things that Taleb and Keynes say are impossible to quantify. These calculable probabilities aren’t perfect, but they are better than nothing — and they are useful enough to be deployed in the service of intelligence analysis.

    More importantly for us, however, the book does not make its case by arguing from principle. The book does this by arguing from empirical result. It lets reality be the teacher.

    The Concise Case for Forecasting

    I’m going to save my summary of Tetlock and Gardner’s book for another day — it’s been a few years since I read Superforecasting, and I’ll need time to reread the book and to do their ideas justice. But a couple of weeks ago I came across a paper by Tetlock and Joseph Cerniglia in the Journal of Portfolio Management, about applying the Superforecasting process to the business of stock picking. This was surprising to me; I had written off Tetlock’s techniques in 2016 because the research was focused on geopolitical forecasting — something that had little to do with my own field of work at the time (read: running operations for a business in Vietnam).

    In this new paper, Tetlock argues what should have been obvious to me all those years ago: if expertise is having a predictive model that works, then using the techniques of the Good Judgment Project is one way to accelerate the acquisition of expertise — at least, in analytical work. We should also expect the techniques of the GJP to be particularly well-suited to acquiring expertise in ‘irregular’ domains — domains like stock picking, technology investing, or competitive analysis.

    This was intriguing enough to explore — and so I started playing with the process in a few initiatives I've been involved with. I'll talk about the results of those experiments in a separate post.

    This week, however, I want to tackle the elephant in the room: how is it that forecasting is even possible? Isn’t probabilistic forecasting a crapshoot when dealing with uncertainty (as Taleb so gamely likes to remind us)? And wouldn't coming up with probabilities for such path-dependent forecasts trick us into a false sense of confidence about the future?

    These questions aren't merely theoretical. In 2010, IARPA (the Intelligence Advanced Research Projects Activity) started funding a research initiative named the Aggregate Contingent Estimation (ACE) Program. ACE was a competition to determine the best way to ‘elicit, weight, and combine the judgment of many intelligence analysts’. It cost IARPA millions of dollars and lasted five years.

    To a flaneur like Taleb, forecasting under uncertainty is impossible and you shouldn’t even bother. But to IARPA, this was an important hypothesis to verify. The intelligence community remains a major purchaser of forecasting services. It seemed worthwhile to study the possibility of improving such forecasts. Tetlock and Cerniglia write:

    These empirical discoveries were enabled by a conceptual insight. IARPA’s scientific advisers recognized that even if they conceded the entire argument to the skeptics and granted that every event skeptics classify as unique is indeed one of a kind, it would still be the case that these events would have something important in common: their alleged uniqueness. We thus have a logical warrant to treat all such “unique” events as constituting a set in their own right. Let’s call it the set of allegedly unique events, or the sui-generis set. This logical move sets the stage for transforming a stalemated semantic debate into an empirical test. If it were impossible to assign meaningful probability estimates to events in the sui-generis set, then the numbers that forecasters put on these events should be meaningless and it should also be impossible to (1) find forecasters who are consistently better than others at estimating the likelihoods of such events; (2) train people to become better forecasters; (3) set up high-performing teams; or (4) design algorithms that outperform a simple averaging of the essentially meaningless forecasts.

    IARPA now had a series of scientifically testable hypotheses and promptly funded a series of forecasting tournaments to test them. The results were conclusive: Each of the four just-listed impossibilities proved possible. Unique events were not as unique as skeptics supposed.

    This is what I mean when I say that Superforecasting makes its arguments based on empirical result. As practitioners, we want to pay attention to techniques that have been tested against reality (whenever possible). Tetlock’s Good Judgment Project appears to pass this bar. I was surprised at the wholly empirical stance Tetlock appears to take, both throughout his book and in his public comments about his work.
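
    What makes those four claims testable is that probability forecasts can be scored after the fact; the Good Judgment Project used the Brier score (the squared error between a forecast and what actually happened, lower is better) for exactly this. Below is a minimal Python sketch with invented forecasters, forecasts, and outcomes. It illustrates how you might check claim (1), that some forecasters are consistently better than others, and claim (4), that an aggregation algorithm can beat a simple average. The ‘extremizing’ step at the end is a toy version, loosely modelled on the kind of adjustment the GJP researchers reported; none of this is their actual code.

    ```python
    # Invented forecasts and outcomes -- purely to illustrate how
    # probability forecasts get scored after the fact.

    def brier(p: float, outcome: int) -> float:
        """Squared error between a probability forecast and a 0/1 outcome."""
        return (p - outcome) ** 2

    def mean_brier(probs, outcomes):
        return sum(brier(p, o) for p, o in zip(probs, outcomes)) / len(outcomes)

    # Each forecaster puts a probability on five allegedly unique events.
    forecasts = {
        "careful":       [0.80, 0.15, 0.70, 0.25, 0.90],
        "coin_flipper":  [0.50, 0.50, 0.50, 0.50, 0.50],
        "overconfident": [0.99, 0.01, 0.99, 0.99, 0.99],
    }
    outcomes = [1, 0, 1, 0, 1]  # what actually happened

    # Claim (1): some forecasters are consistently better than others.
    for name, probs in forecasts.items():
        print(f"{name:>14}: mean Brier score = {mean_brier(probs, outcomes):.3f}")

    # Claim (4): an aggregation algorithm can beat a simple average.
    simple_avg = [sum(ps) / len(ps) for ps in zip(*forecasts.values())]

    # Toy 'extremizing' step: push the averaged probability away from 0.5.
    extremized = [p**2 / (p**2 + (1 - p)**2) for p in simple_avg]

    print(f"{'simple average':>14}: mean Brier score = {mean_brier(simple_avg, outcomes):.3f}")
    print(f"{'extremized':>14}: mean Brier score = {mean_brier(extremized, outcomes):.3f}")
    ```

    Run on meaningful forecasts, gaps like these show up consistently; run on genuinely meaningless numbers, they shouldn’t. That is the empirical wedge the quoted passage describes.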

    I’ll close this section with a short note from Tetlock on the tension between the Klein-and-Gigerenzer-style approach to decision-making under uncertainty (embrace biases and heuristics as effective tools of decision-making!), and the Kahneman-and-Tversky approach of ‘bias adjustment’ (override System 1 biases and heuristics!).

    It turns out that Tetlock doesn’t care about this debate; he only wants to know what works:

    I don't have a dog in this theoretical fight. There's one school of thought that puts a lot of emphasis on the advantages of blink, on the advantages of going with your gut. There's another school of thought that puts a lot of emphasis on the value of system two overrides, self-critical cognition … giving things a second thought. For me it is really a straightforward empirical question of what are the conditions under which each style of thinking works better or worse?

    In our work on expert political judgment we have generally had a hard time finding support for the usefulness of fast and frugal simple heuristics. It's generally the case that forecasters who are more thoughtful and self-critical do a better job of attaching accurate probability estimates to possible futures. I'm sure there are situations when going with a blink may well be a good idea and I'm sure there are situations when we don't have time to think. When you think there might be a tiger in the jungle you might want to move very fast before you fully process the information. That's all well known and discussed elsewhere. For us, we're finding more evidence for the value of thoughtful system two overrides, to use Danny Kahneman's terminology.

    Longtime readers of Commonplace will know that I have a particular soft spot for Gary Klein and co’s Naturalistic Decision Making paradigm; I feel that the popsci cult that’s developed around Kahneman’s approach to decision-making is overhyped and overdone.

    But Tetlock’s pragmatism appeals to me. It’s pretty clear that both approaches have value, depending on the domain you apply them to. And I happen to hold the principle of testing against reality more tightly than I do a belief in any one side of the decision-making debate. GJP’s results tell me — if nothing else — that it’s worthwhile to give it a try.

    Uncertainty and Careers

    Investor Jerry Neumann has a wonderful blog post about the importance of uncertainty for the creation of startups. He writes:

    Imagine you work for a large company and propose a plan for a project that has a quantifiable chance of producing a specific multiple of its outlay within a set period of time. That is, the project is risky, but measurably risky. The sort of thing where the recommendation memo says something like “The project has a 50% chance of a 3x return on investment within two years.” Rational managers can decide whether to proceed with the project based on its calculated expected outcome. Insurance companies and casinos do this as a matter of course and every business does it to some extent. This is a quantifiable risk with a positive expected outcome and the decision can be defended no matter how it turns out.

    Now imagine walking into your boss’s office and presenting an investment rife with uncertainty. “How likely is this to succeed?” your boss asks. “I don’t know.” you say. “How big will it be if it works?” your boss asks. “I don’t know.” you say. “Why don’t you know?” “Because the customers may be different than who we think; because the customers may want a somewhat different product; because the other companies we need to produce complementary products may decide not to.” Etc. “Well,” your boss says, “we’ll just have to wait until we know those things before we can make a decision.”

    Unfortunately, these things and many others may be uncertain—meaning they can’t be known beforehand, the information does not yet exist. Your boss will never approve the project. Many entrepreneurs find themselves in the same situation: facing these substantial uncertainties. But without the boss they can go ahead with the project anyway. The entrepreneur may decide to act, while the boss will not, because the entrepreneur does not need to explain themselves to anyone.

    Big companies shrink away from uncertainty but embrace risk. Similarly, humans are far more tolerant of risk than they are of uncertainty. The Ellsberg Paradox is an experiment where you are offered two urns — the first with 50 black balls and 50 red balls, the second with an unknown mix of 100 balls that are either red or black. When told that you will win $100 if you pick a red ball out of either urn, most people prefer to pick from the first urn, the one with the known quantity. They simply prefer a known probability of winning to the unknown probability of the second urn.
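
    The strange part is that, on expected value alone, the two urns are equivalent. If you model total ignorance about the second urn as a uniform prior over every possible composition (an assumption made purely for the sake of this Python sketch), the expected chance of drawing red is the same 50%:

    ```python
    # Ellsberg's two urns, on expected value alone. The uniform prior over
    # the second urn's composition is an assumption made for illustration.
    prize = 100  # dollars for drawing a red ball

    # Urn 1: 50 red, 50 black -- the probability of red is known.
    p_red_urn1 = 50 / 100

    # Urn 2: 100 balls, unknown mix. Model ignorance as "every composition
    # from 0 to 100 red balls is equally likely".
    compositions = range(0, 101)
    p_red_urn2 = sum(n / 100 for n in compositions) / len(compositions)

    print(f"urn 1: P(red) = {p_red_urn1:.2f}, expected win = ${prize * p_red_urn1:.2f}")
    print(f"urn 2: P(red) = {p_red_urn2:.2f}, expected win = ${prize * p_red_urn2:.2f}")
    ```

    Same expected payoff, yet the preference for the first urn is robust; the aversion is to not knowing the odds, not to the odds themselves.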

    Career moves are more often than not like picking from the second urn. I’ve written before that one approach to building career moats is to bet big on the rise of a discontinuous innovation. I described a friend who is betting her career on the rise of Augmented Reality/Virtual Reality interfaces. If the technology plays out the way she thinks it will, she will reap the benefits of an early competitive advantage in the field. If it doesn’t, then she would have ‘wasted’ five years of her life.

    (I say ‘wasted’ because careers, like history, are path-dependent — who is to say that her path won’t lead her to other, unexpected opportunities? Uncertainty is truly uncertain.)

    Many career decisions are difficult because they deal with uncertainty, not risk. Tetlock and his collaborators in the Good Judgment Project show us that humans aren’t entirely useless when faced with uncertainty. But they also show us the exact limits of our powers. Forecasting is difficult and improvement is hard, but this is a long way from saying that the future is unknowable and all analysis is futile.

    Next week, we'll look at how superforecasting works, and then examine, critically, how Tetlock’s ideas might be applicable to the average career.

    Note: this post is part of The Forecasting Series. The next post is available here: How Do You Evaluate Your Own Predictions?
