A few months after I wrote my summary series on Tetlock’s forecasting work, and two months after I attempted to forecast in service of a real-world decision at the start of the coronavirus pandemic, I’ve begun to take a more measured view of forecasting.
In a paragraph: when it comes to real-world performance, I’m beginning to think that speed of adaptation is more important than accuracy of forecasting. This is true of people, and it is also true of organisations.
This seems like an obvious, ‘duh’ sort of conclusion, but I want to own up to my mistakes here. I think I sort of fell into the rabbit hole of Tetlock’s research and got excited about pushing the forecasting frontier. I did not pause to think “hmm, so these ideas are really cool and all, but how do I apply this to my life?”
I’m coming around to this way of thinking as a result of three catalysts.
The first was — as I’ve mentioned earlier — that I attempted to apply the superforecasting playbook to my decision around purchasing a flight back to Vietnam. I’ve written about that experience elsewhere, so I’m not going to repeat my conclusions here.
The second catalyst happened when I signed up for and was accepted into the second Good Judgment Project. I realised, quickly, that the amount of thoughtfulness and time needed to get good at predictions was way above what I was willing to put into it.
Tetlock himself says something similar in a recent interview:
COWEN: If you could take just a bit of time away from your research and play in your own tournaments, are you as good as your own best superforecasters?
TETLOCK: I don’t think so. I don’t think I have the patience or the temperament for doing it. (emphasis added) I did give it a try in the second year of the first set of forecasting tournaments back in 2012, and I monitored the aggregates. We had an aggregation algorithm that was performing very well at the time, and it was outperforming 99.8 percent of the forecasters from whom the composite was derived.
If I simply had predicted what the composite said at each point in time in that tournament, I would have been a super superforecaster. I would have been better than 99.8 percent of the superforecasters. So, even though I knew that it was unlikely that I could outperform the composite, I did research some questions where I thought the composite was excessively aggressive, and I tried to second guess it.
The net result of my efforts — instead of finishing in the top 0.02 percent or whatever, I think I finished in the middle of the superforecaster pack. That doesn’t mean I’m a superforecaster. It just means that when I tried to make a forecast better than the composite, I degraded the accuracy significantly.
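It’s worth pausing on what that ‘composite’ is. The Good Judgment Project’s published aggregation methods, as I understand them, weighted forecasters by track record and then ‘extremized’ the pooled estimate. Here is a deliberately minimal sketch of the idea: an unweighted average in log-odds space with an illustrative extremizing factor. It is not GJP’s actual algorithm or parameter, just the shape of the technique.

```python
import math

def aggregate(probabilities, extremize=2.5):
    """Pool individual probability forecasts into a single composite.

    Averages the forecasts in log-odds space, then pushes the result
    away from 0.5 ('extremizing'), because forecasters who each hold
    only part of the available evidence tend to be jointly underconfident.
    """
    log_odds = [math.log(p / (1 - p)) for p in probabilities]
    mean = sum(log_odds) / len(log_odds)
    # Multiplying log-odds by a factor > 1 pushes the estimate away
    # from 0.5; the value 2.5 is illustrative, not GJP's parameter.
    return 1 / (1 + math.exp(-extremize * mean))

# Five forecasters who lean the same way for partly different reasons:
print(aggregate([0.60, 0.65, 0.70, 0.55, 0.60]))  # ~0.78, above any individual
```

Notice that the composite ends up more confident than any single forecaster who feeds into it, which is exactly why second-guessing it is so hard.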
In Superforecasting, Tetlock wrote that top performance in the IARPA tournaments was like walking a tightrope — even the slightest mistake would mean taking a tumble in the rankings. This shouldn’t be surprising to us — exceptional performance in most fields of human activity is exactly like this. But it’s one thing to know this and quite another to experience the hard work needed to maintain that performance.
So I balked at the work involved. I didn’t think it was worth my time to get really good at making well-calibrated forecasts — at least, not when evaluated against my actual goals. I also came to this view because I had begun to look into the experience of organisations that thrived under uncertain conditions. Did they necessarily make better predictions?
The catalyst for this third line of investigation came from reading Scott Alexander’s post A Failure, But Not of Prediction. In it, Alexander makes the common-sense observation that ‘Making decisions is about more than just having certain beliefs. It’s also about how you act on them.’
The non-pandemic example that he gives is about cryonics. A few years earlier, Alexander had surveyed LessWrong members and asked them how likely they thought cryonics was to work. The ones who had pre-paid for brain freezing estimated a 12% chance of cryonics working and of their getting resurrected. The ‘control’ group — that is, the group that thought the idea was insane — estimated around a 15% chance of it happening. So the group that had paid believed cryonics was less likely to succeed than the group that didn’t. What was going on here?
Alexander writes:
I think they were (both) actually good at probabilistic reasoning. The control group said “15%? That’s less than 50%, which means cryonics probably won’t work, which means I shouldn’t sign up for it.” The frequent user group said “A 12% chance of eternal life for the cost of a freezer? Sounds like a good deal!”
The point, of course, is that well-calibrated probabilistic estimations are one half of a decision-making equation. The other half is taking action on the estimates. And taking action depends on your orientation: that is, how you see the world and how you do cost-benefit analyses around the expected value of those estimates. One person might poll a group of virology experts in late January and get “there’s a 10%-20% chance this becomes a global pandemic and kills hundreds of thousands of people” and take prudent preparatory action; another person might poll the same group of experts and decide 10%-20% was a low enough estimate to go kite surfing over spring break.
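To make that arithmetic concrete, here is a toy expected-value check. The payoff and cost figures are entirely invented, not anything from Alexander’s survey; the point is only that two people with near-identical probability estimates can rationally land on opposite decisions.

```python
def should_act(p_success, payoff, cost):
    # Act when expected value exceeds cost; whether p_success clears
    # 50% is irrelevant to the decision.
    return p_success * payoff > cost

# Hypothetical valuations: the signed-up group values resurrection
# enormously relative to the fee; the control group values it far less.
print(should_act(0.12, payoff=1_000_000, cost=50_000))  # True: sign up
print(should_act(0.15, payoff=100_000, cost=50_000))    # False: pass
```

The control group’s higher estimate still loses to the pre-paid group’s lower one, because the decision is driven by the payoff term, not the probability alone.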
Generating Forecasts, Consuming Forecasts
A mentor and I had a heated conversation a few weeks ago, where we debated Nassim Nicholas Taleb’s ongoing disagreement with Philip Tetlock. My mentor said something along the lines of: “well, you can get all the best forecasters in the world you want and shove them into the intelligence community, but you’d probably get better results building a competent government instead.”
This was, of course, an extremely flippant comment. It was a bit like saying “well, you can work hard all you want to get that promotion, but you’re better off just getting rid of all the racism in the world” — which is technically true … but practically speaking rather useless. Many countries can’t build a competent government in a decade; individuals can’t be expected to rid the world of racism. But in an ideal world you want to work on both things at the same time. You want to play the career game given the constraints of a biased career ladder, and you want to fight against racist standards. You want to improve the forecasting capabilities of the intelligence community and you want to improve government competence at consuming that intelligence while you do it.
In an ideal world (and this is the operative clause) the failure of one side does not justify non-improvement on the other. Tetlock’s work is valuable because it helps prevent a repeat of the failures of the US intelligence community that led to the Iraq war. The incompetence of the current administration does not diminish what he (or IARPA) set out to do.
But my mentor’s flippant comment also contained a kernel of truth. Forecasting is difficult. The forecasting frontier (that is, the absolute limit of human forecasting performance) is fairly low. On February 20th this year, Tetlock’s superforecasters predicted only a 3% probability that there would be 200,000+ coronavirus cases a month later (there were). That isn’t to say that they did badly — their forecast was probably the best-calibrated estimate given everything we knew about the virus at the time. But it goes to show that even the best forecasting institutions in the world can’t handle certain extremely uncertain situations.
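One way to see why a missed 3% forecast is not, by itself, evidence of bad forecasting is the Brier score, the standard squared-error measure used in these tournaments. The numbers below are illustrative, not the superforecasters’ actual scores.

```python
def brier(forecast, outcome):
    """Brier score for a binary question: squared error, lower is better."""
    return (forecast - outcome) ** 2

# A single miss on a confident forecast is ugly on that one question...
print(brier(0.03, 1))  # 0.9409, close to the worst possible score of 1.0

# ...but calibration is judged across many questions. If events assigned
# 3% really do occur about 3 times in 100, the average score is excellent
# even though a few of those 3% events happened.
scores = [brier(0.03, 1)] * 3 + [brier(0.03, 0)] * 97
print(sum(scores) / len(scores))  # ~0.029
```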
So what should we take away from this? My takeaway is that — maybe — we’d be better off if we assumed forecasting is ‘too hard’ most of the time, and made our decisions as if that were true.
What Organisations Do This?
What happens when you have an organisation that assumes forecasting under uncertainty is ‘too difficult’, but is then great at responding to that uncertainty?
I think you’d get Koch Industries.
Koch is known primarily as an evil energy conglomerate. But that doesn’t detract from its effectiveness. Think about what Charles Koch managed to accomplish: he grew the company his father left him into the second-largest privately held conglomerate in the United States within a couple of decades, with little-to-no debt and zero outside investment. He then used that money to fund conservative, anti-environment agendas in the US (which rightly earned him his evil reputation). But my takeaway from that story is that he and his company are nothing if not effective. Best to learn what they did well, and put those lessons to better causes.
More importantly for our purposes, Koch started his journey during the 70s and 80s, in some of the most volatile oil markets in history. Christopher Leonard, in his book Kochland, writes about the transformation Koch undertook in 1973, right after the Yom Kippur War triggered an oil shock that reverberated around the world:
The price shock caused a calamity inside Koch Industries. Charles Koch had been quietly expanding a profitable segment of the company, a shipping division that carried crude oil on oceangoing tankers. Strong demand for US oil imports created a small boom for oil tankers, and Koch Industries signed leases to carry crude around the world. The money was so good that Charles Koch decided to make a giant bet on the business by building a “supertanker” of his own. He named it after his mother, Mary R. Koch, then in her midsixties. What Charles Koch didn’t realize was that he was making a giant, one-directional bet on the future of oil imports. When production plummeted, the bet left him exposed. The shipping market was plagued by crippling excess capacity, almost overnight. The value of the Mary R. Koch plunged, and Koch was obligated to pay for money-losing shipping leases.
The mid-1970s were a period of economic crisis for both Koch Industries and the United States. The years of inflation, recession, and energy shocks transformed America’s political and economic landscapes. This period also shaped Koch Industries. In response to the crisis, Charles Koch began to transform the company into an institution that was built for the new era of volatility. The changes made during this time laid foundations for Koch Industries that remained in place for decades. Charles Koch aimed to build a corporation that would not only survive the brutal swings of the marketplace, but profit from them. He built a company that learned constantly from the world around it and prized information discovery above almost everything. It was a company that embraced change and hated permanence, one where every division would be up for sale all the time. He built a structure with centralized control — which emanated from his boardroom — but that also gave managers and employees a remarkable level of freedom. He fused the sophisticated management techniques he learned as a consultant in Boston with the folk wisdom of his mentor Sterling Varner and the free-market religion of thinkers like Hayek and von Mises. Also during this time, Charles Koch built a political action network that he operated in tandem with Koch Industries’ business, creating a public influence operation that was arguably unique in the history of corporate America.
Even in the face of a downturn, Charles Koch invested heavily in Pine Bend to ensure its long-term profitability. But investing money alone wasn’t at the heart of Koch’s efforts to transform Pine Bend. The effort would not be built on cash — it would be built on information. In the face of unprecedented market volatility, Charles Koch and his team adopted a strategy that would inform Koch Industries for decades. It relied on deep analysis and information gathering. Charles Koch couldn’t control the market’s violent ups and downs, but by understanding them better, he could beat his competitors. (emphasis added)
The rest of the book examines the various things Koch did over the next three decades to enable the company to re-orient quickly in response to volatility. Koch began to treat each subsidiary as an information-gathering node in a larger information network (i.e. a market). It organised its business units to track and pass along information about prices, suppliers, and customers. It bought and used computers to collate this information in the early 80s. Koch later got rid of budgets in order to let his managers act with autonomy, under a regime of ROIC (return on invested capital). He established firewalls within the conglomerate to protect subsidiaries from each other. And he revised his organisational structure — through repeated trial and error — to facilitate the assignment of decision-making rights in response to the information the company was gathering.
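A quick note on the ROIC regime, since it carries a lot of weight in that list: judging every manager on return on invested capital gives headquarters one comparable number per business unit, which is what makes autonomy without budgets workable. The sketch below uses the textbook definition with invented figures; Koch’s internal measure may well differ.

```python
def roic(operating_profit, tax_rate, invested_capital):
    """Return on invested capital: after-tax operating profit (NOPAT)
    divided by the capital tied up in the business."""
    nopat = operating_profit * (1 - tax_rate)
    return nopat / invested_capital

# Two hypothetical business units, numbers invented for illustration:
print(roic(operating_profit=50, tax_rate=0.25, invested_capital=500))  # 0.075
print(roic(operating_profit=20, tax_rate=0.25, invested_capital=100))  # 0.15
```

A manager freed from budget approvals can still be held to account: capital simply flows to whichever unit compounds it fastest.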
The net result is that Koch Industries flowed like water: moving opportunistically into neighbouring industries over the course of a few short decades. As a company, they never did need to predict the future to do well. They could simply collect, observe, orient, and then act faster than anyone else in their industry in response to changing macro-economic conditions.
And Koch isn’t the only one like this.
On the 17th of April this year, Charlie Munger was interviewed in the WSJ by longtime friend Jason Zweig. I quote:
Surely hordes of corporate executives must be calling Berkshire begging for capital?
“No, they aren’t,” said Mr. Munger. “The typical reaction is that people are frozen. Take the airlines. They don’t know what the hell’s doing. They’re all negotiating with the government, but they’re not calling Warren. They’re frozen. They’ve never seen anything like it. Their playbook does not have this as a possibility.”
He repeated for emphasis, “Everybody’s just frozen. And the phone is not ringing off the hook. Everybody’s just frozen in the position they’re in.”
With Berkshire’s vast holdings in railroads, real estate, utilities, insurance and other industries, Mr. Buffett and Mr. Munger may have more and better data on U.S. economic activity than anyone else, with the possible exception of the Federal Reserve. But Mr. Munger wouldn’t even hazard a guess as to how long the downturn might last or how bad it could get.
“Nobody in America’s ever seen anything else like this,” said Mr. Munger. “This thing is different. Everybody talks as if they know what’s going to happen, and nobody knows what’s going to happen.”
Is another Great Depression possible?
“Of course we’re having a recession,” said Mr. Munger. “The only question is how big it’s going to be and how long it’s going to last. I think we do know that this will pass. But how much damage, and how much recession, and how long it will last, nobody knows.”
He added, “I don’t think we’ll have a long-lasting Great Depression. I think government will be so active that we won’t have one like that. But we may have a different kind of a mess. All this money-printing may start bothering us.”
Can the government reduce its role in the economy once the virus is under control?
“I don’t think we know exactly what the macroeconomic consequences are going to be,” said Mr. Munger. “I do think, sooner or later, we’ll have an economy back, which will be a moderate economy. It’s quite possible that never again—not again in a long time—will we have a level of employment again like we just lost. We may never get that back for all practical purposes. I don’t know.”
Note how many times Munger simply says “I don’t know.” He is — to use his own words — dropping forecasting into the ‘too hard’ pile.
Munger is observing, orienting, and then moving when he thinks it is prudent … but before everyone else.
I think there’s something here that’s worth talking about. Rapid orientation in response to uncertainty is a more tractable goal than creating well-calibrated forecasts of the future.
I want to understand what this means in a practical sense. I want to know what it looks like when applied to businesses, and to careers. I write this because I’m looking into a body of work that deals with this sort of thing: the work of John Boyd and George Stalk Jr, and the W. Edwards Deming ideas that Charles Koch used to build Koch Industries.
I’ll report here when I’ve more to say.