Using Inversion


    Investor and businessman Charlie Munger is fairly well known for his quote on inversion: “All I want to know is where I'm going to die so I'll never go there.” Munger has made a large number of quips to this effect over the past few decades — for instance, he once said, “If you were hired by the World Bank to help India, it would be very helpful to determine the three best ways to increase man-years of misery in India — and, then, turn around and avoid those ways.”

    The origin of the quote is older than Munger himself, of course. As far as I can tell, the best source for the idea is Munger’s 1986 Harvard School commencement speech, where he attributed the saying to a ‘rustic’. And the premise of that speech was to take Johnny Carson’s ‘prescriptions for guaranteed misery in life’ and deliver them with a Mungerian twist:

    My final prescription to you for a life of fuzzy thinking and infelicity is to ignore a story they told me when I was very young about a rustic who said: “I wish I knew where I was going to die, and then I'd never go there.” Most people smile (as you did) at the rustic's ignorance and ignore his basic wisdom. If my experience is any guide, the rustic's approach is to be avoided at all cost by someone bent on misery. To help fail you should discount as mere quirk, with no useful message, the method of the rustic, which is the same one used in Carson's speech.

    What Carson did was to approach the study of how to create X by turning the question backward, that is, by studying how to create non-X. The great algebraist, Jacobi, had exactly the same approach as Carson and was known for his constant repetition of one phrase: “Invert, always invert.” It is in the nature of things, as Jacobi knew, that many hard problems are best solved only when they are addressed backward. For instance, when almost everyone else was trying to revise the electromagnetic laws of Maxwell to be consistent with the motion laws of Newton, Einstein discovered special relativity as he made a 180 degree turn and revised Newton's laws to fit Maxwell's.

    I find the notion of inversion (whether it comes from Munger, or Carson, or Jacobi, or the ‘rustic’) to be particularly elegant. And I think many others feel the same attraction: there are books and blog posts dedicated to inversion as a self-help idea, or as a tool for stock market analysis.

    Munger often wields inversion like a knife — a way to cut a problem down to size. And it's clear that the two heads of Berkshire Hathaway share this approach; here’s Warren Buffett, for instance: “When we look at businesses, we try to look at businesses that are good businesses today and think about what can go wrong. We think of business risk in terms of what can happen five, 10 or 15 years from now that will destroy, modify or reduce the economic strengths we believe currently exist in the business. And for some businesses, that's impossible to figure — at least it's impossible for us to figure — and we don't even think about it. If we can think of very much that can go wrong, we just forget it.”

    So that’s all fine when it comes to equity analysis. But how best to apply inversion to the average career?

    Picking People To Work For

    The most practical use I've found for inversion is to use it as a way to evaluate the people I want to work for.

    I’ve mentioned previously that I’ve over-optimised my career for startups, and that my approach to evaluating potential startup employers is to ask lots of disqualifying questions.

    The reasoning for that approach is simple. When you want to work at a startup, you can’t really rely on an industry grapevine to evaluate the role, since startups are usually too small to corral good information about. So you need to develop evaluation techniques of your own. The main technique I use is something I’ve called a ‘head-fake question’ — that is, a question where I’m pretending to ask about something, but am actually evaluating something else in the answer. This makes it difficult for the startup to lie. I then wrote that one’s job is to build a collection of good ‘head-fake questions’ over the course of a career … and that you need a good understanding of your industry in order to do so.

    Again, the goal isn’t to pick a startup that is likely to succeed; that is too difficult. The goal is to do something simpler: to recognise that all startups are shit-shows on the inside — so your best bet is to pick a startup that is shitty in ways that you can tolerate. More precisely, you want to know that the shit you’ll have to deal with wouldn’t interfere with (or would give you exactly) the skills or experiences that you want to acquire.

    I gave a concrete example of a head-fake question in that essay:

    When interviewing with a sales-driven B2B product company, one of my most important vetting questions is about the tension between product and sales. My question usually takes on the form of: “In my experience, some clients demand extensive customisations. How do you balance client customisations that you need to do to close sales deals, against spending engineering time on features that are more generally useful for the rest of your users?”

    My interviewer will usually see this as a question about the company’s practices or processes. But what I’m actually trying to find out is how much power the product organisation has relative to the sales organisation. A healthy tension between the two — expressed in a give-and-take relationship — is a good sign that a lot of things are going right in the company. Conversely, a company where product consistently overrules sales considerations, or where product is overruled by excessive client customisations in the service of closing deals, is a highly dysfunctional company to work in.

    The important thing to note here is that disqualifying questions aren’t a perfect filter. If a company passes all your head-fake questions, there’s still a possibility that the company is shit in ways that you cannot predict. In other words, disqualification sticks — if a company fails your screens, it’s pretty much guaranteed to suck — but failure to disqualify isn’t an indication of goodness. You still have to make a bet when you’re joining a new company.

    Recently, I was talking with a friend who had done well for himself in a FAANG corp, and he told me he used this exact technique to evaluate potential managers.

    “I think most people can get to Level 5 based on merit — you just have to work hard and prove your worth,” he said. “But it’s hard to get to Level 6 (staff engineer) because it depends on cross-org impact. Which means you have to get lucky. You have to read org politics correctly, pick the right team, get the right projects, work under the right manager.”

    My friend told me that he was parked under an unambitious manager earlier in his career. But his team orchestrated a move to a different department. “I wasn’t sure that the other manager was good. But he didn’t fail any of my tests, whereas our current manager did. So we decided, together, to try and get that manager to transfer us over.”

    It turned out to be the right move. Over the next few quarters, my friend’s team was given increasingly important technical work — work that affected multiple organisations in the company. He was promoted to staff engineer a year or so after the transfer.

    As the conversation wore on, it became clear that he had applied the same idea when evaluating local companies: “I met with the VP Engineering of <local startup>. I asked him ‘How often are your performance reviews?’ He said once a year. Then I asked him ‘How many times do you do promotions?’ He said twice a year. Then I said ‘If you do perf evaluations only once a year and promote twice a year, how does that work?’ And he said he didn’t know; he hadn’t thought about it.”

    We both agreed that we probably didn’t want to work for this startup. A VP of Engineering who didn’t think critically about his org design was a bad sign — if he didn’t think about simple things like the mismatch between perf and promotions, what else did he get wrong? And it was doubly bad that he was embedded in a growing company — growth tends to break org structures, which must then be patched once every few months.

    I remember another episode, this time with a venture-capitalist-turned-operator, back when I had just started in Vietnam. (It was this episode that taught me that being an investor is hard, and being an operator is hard; therefore being a good investor and a good operator is so difficult that barely anyone below the age of 35 is good at both.)

    The VC sat me down and proceeded to tell me all the amazing things they were building at his startup studio. He told me I should join him. He told me I was wasting my time with my current company. “You don’t make very good career decisions,” he said. “And we need operators here. Help me help you, ok?”

    I thought for a moment. “So what’s the biggest challenge with these projects?” I asked, gesturing at the half dozen or so mini-startups he had described over the past 30 minutes.

    “Oh, hiring. It’s difficult to hire good product people.”

    “No, I mean, what’s the most likely way these projects would fail? Like take the HR performance product you described to me. How might that fail?”

    “Like I said, hiring,” he repeated impatiently. “Look, we can hire engineering talent easily. My guys are great at that. We just don’t have people with a good nose for product.”

    I was young and stupid then, but I wasn’t that stupid. I had spent enough time working at startups throughout university to know that a) you don’t need as high a bar for product in B2B companies, so this was unlikely to be the biggest risk for his project; and b) good operators always know the ways in which their efforts are most at risk of failing — and that risk usually wasn’t hiring; hiring was a problem that everyone faced.

    In other words, I had asked him a normal operating question, and he had disqualified himself with his answer. He was likely a bad operator. A couple of years later, his startup studio fell apart and sold itself piecemeal to larger players in the country. (But of course today he goes around saying he had ‘exited’ his company; this is the way such things work.)

    The Effectiveness of Negative Filters

    As you can probably tell by now, the form of inversion that I’ve found most useful in my career is what many would call a ‘negative filter’, or a ‘negative screen’. This is what my friend and I were using in the examples above.

    Negative filters aren’t rare by any means. I think they’re fairly common-sensical, and reasonably widespread. Here’s O’Shaughnessy Asset Management, for instance, in a paper titled Stocks You Shouldn’t Own:

    “Quality” means different things to different investors. We use quality as a negative screen: to avoid stocks rather than to select them. Specifically, we find that factors that measure Financial Strength, Earnings Growth, and Earnings Quality are the most effective ways to objectively remove stocks from consideration for investment.

    I’ve also mentioned in the past that debugging ability is a similar filter: you want to disqualify software engineering candidates based on how badly they perform in a debugging test.

    Again, the core insight is that companies that perform well on measures of ‘quality’ might not make for great stocks, but any company that fails a ‘quality’ bar is almost certainly a bad investment. Similarly, programmers who are good at debugging might not be great programmers, but any programmer who is bad at debugging is almost certainly bad at programming.
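
    To make the shape of this concrete, here is a minimal sketch of a negative screen in Python. The field names and thresholds (debt_to_equity, earnings_growth, accruals_ratio, and the cutoff values) are hypothetical placeholders of my own, not O’Shaughnessy’s actual factor definitions:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Stock:
        ticker: str
        debt_to_equity: float    # stand-in for Financial Strength
        earnings_growth: float   # stand-in for Earnings Growth
        accruals_ratio: float    # stand-in for Earnings Quality

    # Each rule names one way a stock disqualifies itself.
    # Thresholds are invented for illustration only.
    DISQUALIFIERS = [
        ("weak balance sheet", lambda s: s.debt_to_equity > 2.0),
        ("shrinking earnings", lambda s: s.earnings_growth < 0.0),
        ("poor earnings quality", lambda s: s.accruals_ratio > 0.1),
    ]

    def failed_screens(stock: Stock) -> list[str]:
        """Return the names of every screen the stock fails."""
        return [name for name, fails in DISQUALIFIERS if fails(stock)]

    def negative_filter(universe: list[Stock]) -> list[Stock]:
        """Remove stocks that fail any screen; say nothing about the rest."""
        return [s for s in universe if not failed_screens(s)]

    candidates = [
        Stock("AAA", debt_to_equity=0.5, earnings_growth=0.12, accruals_ratio=0.02),
        Stock("BBB", debt_to_equity=3.1, earnings_growth=0.30, accruals_ratio=0.05),
    ]
    survivors = negative_filter(candidates)  # only "AAA" survives
    ```

    Note the asymmetry built into the code: failing any one screen removes a stock for good, but passing every screen says nothing positive. The filter only shrinks the candidate pool; it never ranks what remains. That mirrors the ‘disqualification sticks’ point from earlier.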

    Why do negative filters work so well? I’m not really sure. Munger sidesteps this question by saying it’s in the ‘nature of things’; the mathematician Carl Jacobi was known for telling students that when looking for a research topic, one should ‘invert, always invert’ (‘man muss immer umkehren’) — which came from his belief (not a fact!) that inverting known results could open up new fields for research.

    Perhaps the most compelling argument I’ve found for the effectiveness of negative filters is Karl Popper’s notion of falsification. The intuition behind Popper’s idea is that disconfirmation is more rigorous than confirmation: verifying ‘all swans are white’ would require a survey of every swan on the planet, which is practically impossible, but a single observation of a black swan is sufficient to falsify the statement. One counterexample settles the question for good; no number of confirming observations can.
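
    The asymmetry is easy to render literally. Here is a minimal sketch of the swan example, assuming nothing beyond the logic itself:

    ```python
    def falsified(claim_holds, observations):
        """A universal claim is refuted by a single counterexample."""
        return any(not claim_holds(obs) for obs in observations)

    def is_white(swan):
        return swan == "white"

    # Ten thousand white swans leave the claim unrefuted.
    # Note: unrefuted is not the same as proven.
    swans = ["white"] * 10_000
    print(falsified(is_white, swans))              # False

    # A single black swan settles the question for good.
    print(falsified(is_white, swans + ["black"]))  # True
    ```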

    Of course, Popper takes this single idea and then expands it to a great many things, even proposing that anything that isn’t falsifiable isn’t scientific. He also argues that there are good psychological reasons to seek out disconfirming evidence (as opposed to just confirming examples):

    The discovery of instances which confirm a theory means very little if we have not tried, and failed, to discover refutations. For if we are uncritical we shall always find what we want: we shall look for, and find, confirmation, and we shall look away from, and not see, whatever might be dangerous to our pet theories. In this way it is only too easy to obtain what appears to be overwhelming evidence in favour of a theory which, if approached critically, would have been refuted.

    Whether or not Popper’s notion of falsification has some bearing on the effectiveness of negative filters lies outside the scope of this essay. I’d like to think it does. But it probably doesn’t matter: I’ve found the idea repeatedly useful in my career, and I’ve seen negative filters used again and again by people I respect. I think it is a worthy tool to keep in your cognitive quiver — just in case.
