
‘Strong Opinions, Weakly Held’ Doesn't Work That Well


    There’s a famous thinking framework by ‘futurist’, ‘forecaster’, and scenario consultant Paul Saffo called ‘strong opinions, weakly held’. The phrase itself became popular in tech circles in the 2010s — I remember reading about it on Hacker News or a16z.com or one of those thinky tech blogs around that period. It’s still rather popular today.

    Saffo’s framework — laid out in his original 2008 blog post — goes like this:

    I have found that the fastest way to an effective forecast is often through a sequence of lousy forecasts. Instead of withholding judgment until an exhaustive search for data is complete, I will force myself to make a tentative forecast based on the information available, and then systematically tear it apart, using the insights gained to guide my search for further indicators and information. Iterate the process a few times, and it is surprising how quickly one can get to a useful forecast.

    Since the mid-1980s, my mantra for this process is “strong opinions, weakly held.” Allow your intuition to guide you to a conclusion, no matter how imperfect — this is the “strong opinion” part. Then – and this is the “weakly held” part – prove yourself wrong. Engage in creative doubt. Look for information that doesn’t fit, or indicators that point in an entirely different direction. Eventually your intuition will kick in and a new hypothesis will emerge out of the rubble, ready to be ruthlessly torn apart once again. You will be surprised by how quickly the sequence of faulty forecasts will deliver you to a useful result.

    This process is equally useful for evaluating an already-final forecast in the face of new information. It sensitizes one to the weak signals of changes coming over the horizon and keeps the hapless forecaster from becoming so attached to their model that reality intrudes too late to make a difference.

    More generally, “strong opinions weakly held” is often a useful default perspective to adopt in the face of any issue fraught with high levels of uncertainty, whether one is venturing a forecast or not. Try it at a cocktail party the next time a controversial topic comes up; it is an elegant way to discover new insights — and duck that tedious bore who loudly knows nothing but won’t change their mind!

    On the face of it, it all sounds very reasonable and smart. And ‘strong opinions weakly held’ is such a catchy phrase — which probably explains its popularity.

    The only problem with it is that it doesn’t seem to work that well.

    Swimming Upstream Against the Architecture of the Mind

    How do I know that it doesn’t work that well? I know because I’ve tried. I used Saffo’s framework in the years between 2013 and 2016, while I was running my previous company: I attempted it with my boss whenever we convened to discuss company strategy.

    Eventually I read Philip Tetlock’s Superforecasting, and then I gave up on ‘Strong Opinions, Weakly Held’.

    Why does the framework not work very well? From experience, Saffo’s approach fails in two ways.

    The first failure occurs when the person hasn’t read Saffo’s original post. This is, to be fair, most of us — Saffo’s original idea is so quotable it has turned into a memetic phenomenon, and I’ve seen it cited in fields far outside tech. In such cases, ‘Strong Opinions, Weakly Held’ turns into ‘Strong Opinions, Justified Loudly, Until Evidence Indicates Otherwise, At Which Point You Invoke It To Protect Your Ass.’

    In simpler terms, ‘strong opinions, weakly held’ sometimes becomes a license to hold on to a bad opinion strongly, with downside protection, against the spirit and intent of Saffo’s original framework.

    Now, you might say that this is through no fault of Saffo’s, and is instead the problem of popularity. But my response is that if an idea has certain affordances, and people seem to always grab onto those affordances and abuse the idea in the exact same ways, then perhaps you shouldn’t use the idea in the first place. This is especially true — as we’re about to see — if there are better ideas out there.

    The second form of failure occurs even when the person has taken the time to look up the original intention of the phrase. In this situation, the failure shows up when you attempt to integrate new information into your judgment: Saffo’s framework offers no way for us to do this.

    Here’s an example. Let’s say that you’ve decided, along with your boss, to build a particular type of product for a particular subsegment of the self-service checkout market. You both come to the opinion that this subsegment is the best entry point into the industry: it is relatively lucrative, and you think that it is the easiest customer segment to service.

    What happens to your opinion when you slowly discover that the subsegment is overcrowded? Of course, you don’t find out immediately — what happens instead is that you spot little hints, spread over the course of a couple of months, that many competitors are entering the market at the same time. These are tiny things like competitor brochures lying on a corner table of a client’s office, or pronouncements by industry groups that “they are looking to engage vendors for large deployments”, and then, much later, clearer evidence in the form of increased competition in deals.

    “Well,” I can hear you say, “‘Strong opinions weakly held’ means that you should change your opinion when you encounter these tiny hints!”

    But at which point do you change your mind? At which point do you switch away from your strong opinion? At which point do you think that it’s time to reconsider your approach?

    The problem, of course, is that this is not how the human brain works.

    Both forms of failure stem from the same tension. It’s easy to have strong opinions and hold on to them strongly. It’s easy to have weak opinions and hold on to them weakly. But it is quite difficult for the human mind to vacillate between one strong opinion and another.

    I don’t mean to say that people can’t do this — only that it is very difficult to do so. For instance, Steve Jobs was famous for arguing against one position or another, only to decide that you were right, and then come back a month later holding exactly your opinion, as if it were his all along.

    But most people aren’t like Jobs. Psychologist Amos Tversky used to joke that by default, human brains fall back to “yes I believe that, no I don’t believe that, and maybe” — a dial with just three settings when it comes to uncertainty. People then hold on to their opinion for as long as their internal narratives allow them to. Saffo’s thinking framework implies that you sit in ‘yes I believe that’ territory, and then rapidly switch away to ‘maybe’ or to ‘no’, depending on the information you receive.

    Perhaps you — like Jobs! — are able to do this. But if you are like most people, the attempt will feel a lot like whiplash.

    So, you might ask, what to do instead?

    Use Probability as an Expression of Confidence

    The gentler answer lies in Superforecasting. In the book, Tetlock presents an analytical framework that is easier to use than Saffo’s, while achieving many of the same goals.

    1. When forming an opinion, phrase it clearly, in a way that can be verified by a particular date.
    2. Then state, as a probability, how confident you are that it is correct.

    For instance, you may say “I believe that Tesla will go bankrupt by 31 December 2021, and I am about 76% confident that this is the case.” Or you can be slightly sloppier with the technique — with my boss, I would say: “I think this subsegment is a good market to enter, and I think we would know if this is true within four months. I believe this on the order of 70% ish. Let’s check back in September.”

    (My boss was an ex-investment banker, so he took to this like a duck to water.)
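
    If it helps to see the shape of the technique, here is a minimal sketch of a forecast kept as a record: a clear claim, a check-in date, and a confidence level, with room to revise the number later. The Forecast structure and the dates are my own invention for illustration; nothing in Superforecasting requires code.

    ```python
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class Forecast:
        """One checkable opinion: a clear claim, a date to verify it by, and a confidence."""
        claim: str            # phrased so that it can be judged true or false
        resolve_by: date      # the date by which we agree to check the claim
        confidence: float     # probability, between 0.0 and 1.0, that the claim is true
        history: list = field(default_factory=list)  # earlier confidence levels, for the record

        def revise(self, new_confidence: float, reason: str = "") -> None:
            """Record a revised confidence as new information trickles in."""
            self.history.append((self.confidence, reason))
            self.confidence = new_confidence

    # The sloppier, boss-friendly version from the text, restated as a record:
    market_entry = Forecast(
        claim="This subsegment is a good market for us to enter",
        resolve_by=date(2016, 9, 30),  # hypothetical "let's check back in September"
        confidence=0.70,
    )
    ```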

    The technique Tetlock describes was developed in the context of a geopolitical forecasting tournament called the Good Judgment Project. In 2016, when I read Superforecasting for the first time, I remember thinking that geopolitical forecasting wasn’t particularly relevant to my job running an engineering office in Vietnam. But I glommed onto the book’s ideas around analysis anyway, because they were too attractive to ignore.

    The truth is that Tetlock’s ideas are not unique to his research group. Annie Duke’s Thinking in Bets proposes the same approach, drawn from poker instead, and the ‘rationalist’ community LessWrong has long held norms around stating the confidence of one’s opinions.

    More importantly, Duke and LessWrong have both discovered that the fastest way to provoke such nuanced thinking is to ask: “Are you willing to bet on that? What odds would you take, and how much?”

    You’d be surprised by how effective this question is.
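
    (As an aside, purely for illustration: if someone answers with odds, you can translate those odds straight back into a confidence level. Odds of a-to-b in favour of a claim imply a probability of a / (a + b); the tiny helper below is my own, not something Duke or the LessWrong crowd prescribe.)

    ```python
    def implied_confidence(odds_for: float, odds_against: float) -> float:
        """Odds of a-to-b in favour of a claim imply a probability of a / (a + b)."""
        return odds_for / (odds_for + odds_against)

    print(implied_confidence(3, 1))  # taking 3-to-1 on your own opinion ~ 75% confident
    ```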

    Why is it so effective? Why does it succeed where ‘Strong Opinions, Weakly Held’ does not?

    The answer lies in the ‘strong opinion’ portion of the phrase. First: stating your opinion as a probability judgment — that is, a percentage — forces you to calibrate the strength of your belief. This makes it easier to move away from that belief later. In other words, it forces you to let go of the ‘yes, no, maybe’ dial in your head.

    Second: by framing it as a bet, you suddenly have skin in the game, and are motivated to get things right.

    Of course, you don’t actually have to bet — you can merely propose the bet as a thinking frame. Later, as new information trickles in, you can update the percentage confidence you hold in your belief. This allows you to see the world in shades of grey; it also allows you to communicate that confidence to those around you.
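
    What does “update the percentage confidence” look like in practice? One concrete way to do it (my gloss, not something Saffo or Tetlock prescribes) is the odds form of Bayes’ rule: each small hint nudges the number instead of flipping a switch. The likelihood ratios below are made up for the sake of the self-service checkout example.

    ```python
    def bayes_update(prior: float, likelihood_ratio: float) -> float:
        """Odds-form Bayes update: posterior odds = prior odds * likelihood ratio.
        likelihood_ratio = P(evidence | claim true) / P(evidence | claim false)."""
        prior_odds = prior / (1.0 - prior)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1.0 + posterior_odds)

    # Start at 70% confidence that the subsegment is a good entry point.
    confidence = 0.70
    # Suppose each small hint of overcrowding is twice as likely to appear if the
    # claim is false, i.e. a likelihood ratio of 0.5 against it. (Illustrative numbers.)
    hints = ["competitor brochure at a client", "industry-group announcement", "lost deal to a rival"]
    for hint in hints:
        confidence = bayes_update(confidence, likelihood_ratio=0.5)
        print(f"after '{hint}': {confidence:.0%} confident")
    # Prints roughly 54%, 37%, then 23% -- a gradual slide rather than a single flip.
    ```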

    Revisiting The Hierarchy of Practical Evidence

    I have one final point to make about this approach.

    Long-time readers of this blog will know that my shtick is to “apply a technique to my career or to my life, over the period of a couple of months, and report on its efficacy.” Over time, I’ve noticed that techniques are more likely to be effective when they come from believable practitioners. This is what led to my Hierarchy of Practical Evidence.

    Saffo’s and Tetlock’s ideas are drawn from the domain of forecasting. But this post is about thinking, not forecasting; I’m only confident in recommending one over the other because I’ve had enough experience with both as analytical tools.

    But it’s worth noting that Saffo isn’t particularly believable as a forecaster either.

    For much of Superforecasting, Tetlock rails against professional ‘forecasters’ who deal in vague verbiage and issue long-form narratives about the future. These forecasters can always worm out of a bad forecast, because their pronouncements are carefully worded to provide plausible deniability.

    As I was writing this piece, I skimmed through the book, and was surprised to learn that Tetlock had met with Saffo in the course of the Good Judgment Project, and had written up the encounter. In that account, Saffo dismisses Tetlock’s research out of hand:

    In the spring of 2013 I met with Paul Saffo, a Silicon Valley futurist and scenario consultant. Another unnerving crisis was brewing on the Korean peninsula, so when I sketched the forecasting tournament for Saffo, I mentioned a question IARPA had asked: Will North Korea “attempt to launch a multistage rocket between 7 January 2013 and 1 September 2013?” Saffo thought it was trivial. A few colonels in the Pentagon might be interested, he said, but it’s not the question most people would ask. “The more fundamental question is ‘How does this all turn out?’ ” he said. “That’s a much more challenging question.”

    So we confront a dilemma. What matters is the big question, but the big question can’t be scored. The little question doesn’t matter but it can be scored, so the IARPA tournament went with it. You could say we were so hell-bent on looking scientific that we counted what doesn’t count.

    Tetlock goes on to defend his approach:

    That is unfair. The questions in the tournament had been screened by experts to be both difficult and relevant to active problems on the desks of intelligence analysts. But it is fair to say these questions are more narrowly focused than the big questions we would all love to answer, like “How does this all turn out?” Do we really have to choose between posing big and important questions that can’t be scored or small and less important questions that can be? That’s unsatisfying. But there is a way out of the box.

    Implicit within Paul Saffo’s “How does this all turn out?” question were the recent events that had worsened the conflict on the Korean peninsula. North Korea launched a rocket, in violation of a UN Security Council resolution. It conducted a new nuclear test. It renounced the 1953 armistice with South Korea. It launched a cyber attack on South Korea, severed the hotline between the two governments, and threatened a nuclear attack on the United States. Seen that way, it’s obvious that the big question is composed of many small questions. One is “Will North Korea test a rocket?” If it does, it will escalate the conflict a little. If it doesn’t, it could cool things down a little. That one tiny question doesn’t nail down the big question, but it does contribute a little insight. And if we ask many tiny-but-pertinent questions, we can close in on an answer for the big question. Will North Korea conduct another nuclear test? Will it rebuff diplomatic talks on its nuclear program? Will it fire artillery at South Korea? Will a North Korean ship fire on a South Korean ship? The answers are cumulative. The more yeses, the likelier the answer to the big question is “This is going to end badly.”

    I call this Bayesian question clustering because of its family resemblance to the Bayesian updating discussed in chapter 7. Another way to think of it is to imagine a painter using the technique called pointillism. It consists of dabbing tiny dots on the canvas, nothing more. Each dot alone adds little. But as the dots collect, patterns emerge. With enough dots, an artist can produce anything from a vivid portrait to a sweeping landscape.

    There were question clusters in the IARPA tournament, but they arose more as a consequence of events than a diagnostic strategy. In future research, I want to develop the concept and see how effectively we can answer unscorable “big questions” with clusters of little ones.

    Saffo’s business is selling stories about the future to businesses and organisations; he teaches his approach to business students, who presumably go on to do the same thing. Tetlock’s job is pinning forecasters down on their performance and evaluating them quantitatively, using something called a Brier score. His techniques are now used in the intelligence community.
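
    (For the curious: a Brier score is the squared error between your stated probabilities and what actually happened, averaged over your forecasts; lower is better. The sketch below shows the common binary form; Tetlock’s book describes a variant that sums over both outcome categories, which doubles the scale so it runs from 0 to 2.)

    ```python
    def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
        """Mean squared error between stated probabilities and outcomes (1 = happened, 0 = did not).
        0.0 is perfect; a permanent 50% 'maybe' scores 0.25; confidently wrong every time scores 1.0."""
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

    # Three hypothetical forecasts and how they resolved:
    print(brier_score([0.76, 0.70, 0.20], [0, 1, 0]))  # ~0.24
    ```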

    These are two different worlds, with two different standards for truth.

    You decide which one is more useful.

    Closing Thoughts

    So let’s wrap up.

    In my experience, ‘strong opinions, weakly held’ is difficult to put into practice. Most people who try will either:

    1. Use it as downside-protection to justify their strongly-held bad opinions, or
    2. Struggle to shift from one strong opinion to another.

    It is difficult because it works against the grain of the human mind.

    So don’t bother. The next time you find yourself making a judgment, don’t invoke ‘strong opinions, weakly held’. Instead, ask: “How much are you willing to bet on that?” Doing so will jolt people into the types of thinking you want to encourage.

    Whether you actually put money down is beside the point; whichever way you approach it, it’s still a heck of a lot easier than vacillating between multiple strong opinions.

    See also: The Forecasting Series, A Personal Epistemology of Practice.

    Update: Brad Feld has a 2019 blog post throwing shade on the technique; Michael Natkin over at Glowforge has also had the same experience I had — and, like me, has had better success with the 'probability as statement of confidence' technique.
