
Believability in Practice




    A couple of years ago I wrote about Ray Dalio’s believability metric, which he first articulated in Principles (the hardcover edition of the book was published in 2017, but my original experience with the book was a crude PDF version, freely available on Bridgewater’s website circa 2011).

    It’s been a couple of years since I internalised and applied the idea; this essay contains some notes from practice.

    Believability, A Recap

    The actual technique that Dalio proposed is really simple:

    ‘Believable’ people are people who 1) have a record of at least three relevant successes and 2) have great explanations of their approach when probed.

    You may evaluate a person's believability on a particular subject by applying this heuristic. Then, when you’re interacting with them:

    1. If you’re talking to a more believable person, suppress your instinct to debate and instead ask questions to understand their approach. This is far more effective in getting to the truth than wasting time debating.
    2. You’re only allowed to debate someone whose believability is roughly equal to yours.
    3. If you’re dealing with someone with lower believability, spend the minimum amount of time to see if they have objections that you’d not considered before. Otherwise, don’t spend that much time on them.

    The technique mostly works as a filter for actionable truths. It’s particularly handy if you want to get good at things like writing or marketing, org design or investing, hiring or sales — that is, things that you can do. It’s less useful for getting at other kinds of truth.
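    (If it helps to see the heuristic written down as a procedure, here is a minimal, illustrative sketch in Python. The names, such as Person, is_believable and triage, and the coarse binary treatment of believability are my own choices rather than anything from Dalio; the point is just to show how little machinery the filter needs.)

        from dataclasses import dataclass

        @dataclass
        class Person:
            name: str
            relevant_successes: int = 0   # verifiable successes in this specific domain
            explains_well: bool = False   # can they explain their approach when probed?

        def is_believable(p: Person) -> bool:
            # Dalio's two-part test: at least three relevant successes,
            # plus a great explanation of the approach when probed.
            return p.relevant_successes >= 3 and p.explains_well

        def triage(me: Person, them: Person) -> str:
            # A crude version of the three rules above, treating believability
            # as a binary rather than a spectrum.
            if is_believable(them) and not is_believable(me):
                return "Ask questions to understand their approach; don't debate."
            if is_believable(them) == is_believable(me):
                return "Roughly equal believability: debating is allowed."
            return "Spend minimal time; check only for objections you haven't considered."

        # Hypothetical usage:
        me = Person("me", relevant_successes=1, explains_well=True)
        mentor = Person("mentor", relevant_successes=5, explains_well=True)
        print(triage(me, mentor))   # -> ask questions, don't debate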

    I started putting believability to practice around 2017, but I think I only really internalised it around 2018 or so. The concept has been remarkably useful over the past four years; I’ve used it as a way to get better advice from better-selected people, as well as to identify books that are more likely to help me acquire the skills I need. (Another way of saying this is that it allowed me to ignore advice and dismiss books, which is just as important when your goal is to get good at something in a hurry.)

    I attribute much of my effectiveness to it.

    I’m starting to realise, though, that some of the nuances in this technique are perhaps not obvious — I learnt this when I started sending my summary of believability to folks, who grokked the concept but then didn’t seem to apply it the way I thought they would. This essay is about some of those second-order implications, the ones that only become apparent once you’ve put the idea to practice for a longer period of time.

    Why Does Believability Work?

    Believability works for two reasons: a common-sense one, and a more interesting, less obvious one.

    The common-sense reasoning is pretty obvious: when you want advice for practical skills, you should talk to people who have those skills. For instance, if you want advice on swimming, you don’t go to someone who has never swum before; you go to an accomplished swimmer instead. For some reason we seem to forget this when we talk about more abstract skills like marketing or investing or business.

    The two requirements for believability make more sense when seen in this light: many domains in life are more probabilistic than swimming, so you’ll want at least three successes to rule out luck. You’ll also want people to have ‘great explanations’ when you probe them, because otherwise they won’t be of much help to you.

    The more interesting, less obvious reason that believability works is that reality has a surprising amount of detail. I’m quoting from a famous article by John Salvatier, which you should read in its entirety. Salvatier opens with a story about building stairs, and then writes:

    It’s tempting to think ‘So what?’ and dismiss these details as incidental or specific to stair carpentry. And they are specific to stair carpentry; that’s what makes them details. But the existence of a surprising number of meaningful details is not specific to stairs. Surprising detail is a near universal property of getting up close and personal with reality.

    You can see this everywhere if you look. For example, you’ve probably had the experience of doing something for the first time, maybe growing vegetables or using a Haskell package for the first time, and being frustrated by how many annoying snags there were. Then you got more practice and then you told yourself ‘man, it was so simple all along, I don’t know why I had so much trouble’. We run into a fundamental property of the universe and mistake it for a personal failing.

    If you’re a programmer, you might think that the fiddliness of programming is a special feature of programming, but really it’s that everything is fiddly, but you only notice the fiddliness when you’re new, and in programming you do new things more often.

    You might think the fiddly detailiness of things is limited to human centric domains, and that physics itself is simple and elegant. That’s true in some sense – the physical laws themselves tend to be quite simple – but the manifestation of those laws is often complex and counterintuitive.

    The point that Salvatier makes is that everything is more complex and fiddly than you think. At the end of the piece, Salvatier argues that if you’re not aware of this fact, it’s likely you’ll miss out on some obvious cue in the environment that will then cause you — and other novices — to get stuck.

    Why does this matter? Well, it matters once you consider the fact that practical advice has to account for all of this fiddliness — but in a roundabout way: good practical advice nearly never provides an exhaustive description of all the fiddliness you will experience. It can’t: it would make the advice too long-winded. Instead, good practical advice will tend to focus on the salient features of the skill or the domain, but in a way that will make the fiddliness of reality tractable.

    In practice, this often feels something like: “Ahh, I didn’t get why the advice was phrased that way, but I see now. Ok.”

    Think about what this means, though. It means that you cannot tell the difference between advice from a believable person and advice from a non-believable person from examination of the advice alone. To a novice, advice from a non-believable person will seem just as logical and as reasonable as advice from a more believable person, except for the fact that it will not work. And the reason it will not work (or that it will work less well) is that advice from less believable individuals will either focus on the wrong set of fiddly details, or fail to account for some of the fiddliness of reality.

    To put this another way, when you hear the words “I don’t see why X can’t work …” from a person who isn’t yet believable in that domain, alarm bells should go off in your head. This person has not tested their ideas against reality, and — worse — they are not likely to know which set of fiddly details are important to account for.

    Why Does This Matter?

    At this point it’s perhaps useful to take a step back and reiterate why I take this idea so seriously.

    I’ve just given you an explanation of the differences in advice from believable people and non-believable people. I have not named any names, because I don’t want to call people out in this piece. But consider, for a moment, the kinds of experiences I must have had to be able to tell you about such differences.

    If you take advice from non-believable people, you’ll often waste months chasing after something with the wrong frame. Practice takes time. Mastery takes years. You don’t want to burn months needlessly if you don’t have to.

    I take believability seriously because I’ve burnt months in the past acting on suboptimal advice. I am now ruthless when it comes to evaluating advice, because I do not want to burn months in the future.

    One way I think about this is that the most dangerous advice from non-believable people tends to be ‘perfectly rational, logically constructed, and not really wrong — but not as useful or as powerful as some other framing’. The danger isn’t that you receive advice that just appears to be stupid, or wrong — if that were the case, you would simply reject it. No, the danger is that you receive advice that slows you down — while appearing perfectly reasonable on the surface.

    Ad Hominem

    An obvious objection to believability is that it is a form of ad hominem. Per Wikipedia, ad hominem is a ‘rhetorical strategy where the speaker attacks the character, motive, or some other attribute of the person making an argument rather than attacking the substance of the argument itself.’

    But the Wikipedia page also includes this rejoinder:

    Valid ad hominem arguments occur in informal logic, where the person making the argument relies on arguments from authority such as testimony, expertise, or on a selective presentation of information supporting the position they are advocating. In this case, counter-arguments may be made that the target is dishonest, lacks the claimed expertise, or has a conflict of interest.

    In practice, believability has interesting properties that pure ad hominem doesn’t. For instance, there have been multiple instances where — having determined that a junior person was more believable than I was in some sub-skill like copywriting or software design — I just shut up and set aside my views to listen. I’ve also shushed more senior people when it became clear to me that they were less believable in that particular sub-skill than the junior employee.

    Survivorship Bias

    The next most common objection to believability is ‘how about survivorship bias?’ The reasoning goes something like this: if you focus on those with at least three successes, you may well include individuals who have succeeded by luck. Another way to argue this is to ask: what about all those who have attempted to put believable advice to practice, but failed?

    As with the earlier analogy on swimming advice, there is a useful thought exercise that I like to use, to remind myself of the nuances of the technique in practice. Let’s imagine that you and I are recreational Judo players, and you see me successfully perform a complicated throw on at least three separate occasions, against three different opponents. Do you write this off as survivorship bias? Probably not. In fact, you might ask me to teach you the throw — and I would gladly do so, though I might warn you that it took me three years to learn to use it with any amount of success. (This is quite common, especially with more complicated techniques in Judo).

    But now let’s say that you run some analysis on international Judo competition, and you find out that — unsurprisingly! — the throw is rarely used at the highest levels of competition. The few competitors who use it do so sparingly. How does this change your desire to learn the technique?

    This analogy captures many properties of skill acquisition:

    1. Just because a practitioner is able to use a technique doesn’t mean that you would also be able to — for instance, perhaps the practitioner has longer limbs, or a longer torso, that make the technique impossible for any other body shape. (In marketing, investing, or business, the equivalent is that certain approaches demand additional properties to work — for instance, you might need a certain temperament if you are doing certain styles of investing, or you might require full support from company leadership if you are running certain types of go-to-market playbooks. Or perhaps the approach requires the practitioner to have an equivalently high level of skill in some unrelated domain — say, organisational politics, or org design. These serve as confounding variables that affect your ability to copy the approach and have it work in your specific context.)
    2. Just because a practitioner is able to use the technique in one context (the club level) doesn’t mean that they would be able to do so in high-level competition. This is particularly true in sports like Judo, but is broadly true elsewhere — certain techniques or approaches are harder to learn, with lower probability of success, and so vanish from the highest levels of competition where evolutionary pressures are the strongest. (The business analogy here is that certain approaches work for less-competitive industries, or for certain periods of time, but do not work in more competitive industries or at the end of a cycle).
    3. This does not mean that you cannot learn something from a believable practitioner — it simply means that proof of believability is not a guarantee that the technique will work for you.

    So how do you take these properties into account? Over the past couple of years, I’ve settled on two approaches:

    First, I think of believability as a more powerful negative filter than it is a positive one — in other words, it is more rigorous when removing individuals, books, or advice from consideration. You should pay the bare minimum of attention to less believable advice, and then look elsewhere.

    On the flip side, if you get advice backed by a track record of at least three successes, you should probably treat it as an existence proof that the advice has worked and that the believable person has something to teach you; what you shouldn’t do is treat it as ‘the truth’, since you still need to verify that it works for you in practice.

    Second, I get very granular when it comes to evaluating believability. I’ve alluded to this in the example about copywriting, above — I am perfectly happy to shut up and learn from a more junior person if they are more believable than I am in a particular sub-skill. Consequently, I’ve found it useful to evaluate believability at the level of the specific advice that’s given — which might often mean that I take two or three pieces of advice seriously in a conversation, but then discard the rest, depending on the person’s believability for each specific piece of advice.

    A good example of this is with finance professionals — I am not believable when it comes to investing, so I immediately shut up and listen whenever they talk about finance; on the other hand, I tend to ignore nearly everything that finance folk say about org design. From experience, professionals in smaller funds tend not to have grappled with org design problems the way operators of similarly sized businesses must (mostly because small-to-mid-size businesses tend to be more operationally complex than small-to-mid-size funds).

    My point: survivorship bias is less of a problem when you’re evaluating specific advice against the precise context and the context-dependent believability of the person giving it, especially when you approach things with the assumption that practice is the bar for truth. And even with things like ‘org design’ I am very aware of the limits of my believability — I’ve never run orgs larger than 50 people, for instance, and even then only successfully in certain contexts; I mostly shut up and listen whenever I meet someone who has scaled larger orgs, or who has run equivalently-sized orgs in more operationally challenging domains.

    Suppress Your Views

    The most difficult part about putting believability to practice is actually in the first third of Dalio’s protocol.

    If you’re talking to a more believable person, suppress your instinct to debate and instead ask questions to understand their approach. This is far more effective in getting to the truth than wasting time debating.

    It turns out that the ‘suppress your instinct to debate’ part is relatively easy; the hard part is mentally suppressing your existing models of the domain. To be more precise, the difficult part is that you are not allowed to interpret the expert’s arguments through the lens of what you already know.

    One tricky scenario that you might find yourself in is talking to someone who is demonstrably more believable than you in a particular sub-skill, but then realising that what they say clashes with your existing mental models of the domain.

    A true commitment to believability means that you have to take them seriously. It also means that you have to ignore what you think — because it is likely that you’re missing certain cues that only they can see. Dalio’s protocol implicitly calls for you to grok the more believable person’s worldview — to treat it with respect, to question it and internalise its logic; you may construct actionable tests for the argument later.

    This sounds relatively easy to do in theory. In practice, I’ve found it to be remarkably difficult. What I’ve seen most people do when put in this situation is take whatever the more believable person says, and then paraphrase it using a frame that they currently understand. But this is stupid. You’re not here to interpret the expert’s advice through your existing mental models — in fact, you’re wasting time, since the odds are good that your mental models are subtly mistaken in the first place. Instead, you should set aside your own models and grapple with the advice on its own terms, using the more believable person’s own words, in an attempt to see the world as they see it.

    I’m afraid that I don’t have good examples for you here. The most pernicious instances of this behaviour occur when the practitioner is a journeyman in some domain, and the higher-believability person is more than a few rungs ahead in a particular branch of the skill tree. This in turn means that any detailed example I give you will require you to be familiar with the skill domain itself. But, if I may describe what I’ve noticed in such scenarios: what often happens is that the more believable person will say something with subtle implications, and the less believable person will say “ahh, yes, and —” and then proceed to articulate something that is clearly a function of their current mental model of the domain, in a way that misses those subtle implications.

    A crude, no-good example (that won’t make sense unless you have some experience in marketing): if you put two believable marketers together, one from a performance marketing background and one from a brand marketing background, what you’ll often find is that the performance marketer completely misses the parts of the brand marketer’s skill that involve manipulating low-level levers in a consumer’s psychology.

    It’s not enough to say (to the performance marketer) “you must understand the customer to do brand marketing well” — the performance marketer will think that they already understand the customer, since all marketers use some model of the customer’s journey in their work. What the performance marketer might not notice is that there are depths to the ideal customer’s psychology that the brand marketer is able to plumb (especially with regard to the brand and the brand narrative), and that these depths will be subtle and sound rather ridiculous when articulated. Unless the performance marketer is willing to set aside what they currently know about marketing to listen carefully, they will likely be blind to many of these nuances.

    In practice what’s more likely to happen is that the performance marketer will say, of the brand marketer, “god, I can’t stand them, they’re all full of hot air”, and so will be completely useless with brand marketing for years and years afterwards.

    I’m not sure if that example made any sense to you, but if it did — well, this sort of interaction is exactly what is difficult about putting believability to practice.

    Triangulate Truths

    Here’s a more interesting question, though: what happens when two equally believable people have different approaches to a particular domain? That is, you go to two different people and they give you conflicting advice on some decision, and you’re confused as to what to do.

    We can state this question slightly differently: what happens if a believable practitioner (say, an Olympic medallist) says one thing, but a believable coach (a person with a track record of producing Olympic athletes) says another?

    Or what if you read about two savant investors who have totally different approaches to equity investing?

    How do you evaluate the competing approaches?

    At this point in the essay, it’s actually easier to reason about this — each piece of advice you get comes from a particular context, and the successes in the person’s history serve as an existence proof for the advice. Therefore: hold each approach equally with a loose grip, but test the one with the more accessible explanation first.

    (Or really, test whichever one appeals to you first — it doesn’t actually matter that much so long as you’re willing to act).

    The intuition here is that each believable person is likely to see parts of the skill domain that you can’t, and so advice that appears conflicting to you as a novice very often turns out to be differing interpretations of the same underlying principles once you’ve climbed a bit further up the skill tree.

    Another implication: you can use the differences in advice from equally believable people as a way to triangulate on those base principles. This is similar to what I’ve argued in To Get Good, Go After The Metagame — although in that piece, the primary idea I exploit is the observation that experts operating at some competitive frontier may serve as a north star for self-learning.

    My friend Lesley has an interesting argument when it comes to evaluating coach vs athlete: she says that while both may be equivalently believable (the coach has a track record of producing winning teams; the player has a track record of winning), you really do want to pay attention to the person with the better explanation. The way she puts it: “A 2x champion might be a better coach than a 10x champion (or a famous coach) simply because they are better at synthesis and communication.”

    Which probably explains why Dalio included the clause that believable people should have ‘a credible explanation when probed’.

    I think Lesley is broadly right. If in doubt, test the more accessible explanation first.

    But the important thing, again: you need to test ideas, not argue about them. The purpose of the believability criterion is to get better faster. I hope this helps you do just that.


    This article is part of the Expertise Acceleration topic cluster. Read more from this topic here→
