Ideas, Time Allocation

The Dangers of Treating Ideas from Finance as Generalised Self Help

Given that I’ve been writing about time allocation as capital allocation recently, I thought it would be prudent to recap the dangers of reading too much into ideas drawn from the field of finance.

What do I mean by this? Well, see if you recognise any of the following ideas, which trace their roots to the art of investing:

  • You should judge a decision by the process with which it was made, instead of the outcome.
  • Our thinking is horribly riddled with cognitive biases. Good judgment should take such biases into account.
  • The best way to make a decision is by explicitly analysing events using a latticework of mental models.
  • To make better calibrated decisions, state beliefs using probabilistic values. Update those probabilities as you encounter new events.
  • When making judgments or forming opinions, rigorous analysis is key.

Contrast those ideas with the following:

  • You should judge a decision by its outcome, adjusting your approach until you gain the outcome you desire.
  • All human expertise depends on efficient heuristics. Cognitive biases are what you get when your brain’s heuristics aren’t deployed in service of expertise (or if you are in a domain where expertise isn’t possible).
  • The best way to make a decision is to understand the way the world works. You observe each situation and simulate several courses of action, picking the first that looks like it might work.
  • To make better calibrated decisions, generate multiple causal narratives to pit against each other. Don’t bother with thinking probabilistically — our brains are not made for it.
  • When making judgments, or forming opinions, expertise is key.

The first set of ideas comes from writers who inhabit the finance literature: people like Charlie Munger and Warren Buffett, Howard Marks, Nassim Nicholas Taleb and Shane Parrish. They are wise and successful.

The second set of ideas comes from doctors and lawyers, businessmen and software programmers, soldiers and statesmen: people like Bill Gates, David and Charles Koch, John Boyd and Lee Kuan Yew. They are wise and successful.

The danger of reading only the ideas of the former group is that you apply them to areas where the second group’s ideas work better. People read Munger and Taleb and assume their advice applies to life in general, without pausing to consider that these methods were built for the domains their authors live in.

Here’s a concrete example: when picking stocks, or playing poker, you shouldn’t judge a decision by its outcome. In these domains, it is entirely possible to make the ‘right’ move … and lose. After all, having an 80% chance of winning doesn’t mean you’ll end up in the 80% of universes where you win. You might very well find yourself in one of the 20% of universes in which you lose.

Your job is to preserve your ability to play while continually making the right moves. Over the long run, the odds will favour your pocketbook. But in order to do this, you cannot judge your decisions by their outcomes — instead, you have to ensure that each move you make is the ‘correct’ one (probabilistically speaking).
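The long-run logic here can be sketched in a few lines of Python. This is a toy simulation with made-up numbers, not a model of any real market: a single 80%-favourable bet can easily lose, but across thousands of such bets the results converge on the underlying probability.

```python
import random

def favourable_bet(p_win: float = 0.8) -> bool:
    """One 'correct' move: a bet that wins 80% of the time.
    Any single play can still land in the losing 20% of universes."""
    return random.random() < p_win

random.seed(42)  # fixed seed, purely for a reproducible illustration

# One play proves nothing about the quality of the decision...
single_outcome = favourable_bet()

# ...but repeated over many plays, the process shows through.
wins = sum(favourable_bet() for _ in range(10_000))
win_rate = wins / 10_000
print(win_rate)  # close to 0.8, despite occasional losing streaks
```

Judging yourself by any one `single_outcome` is judging noise; judging yourself by `win_rate` is judging the process.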

If you’re learning to manage people, however, it’s crazy to say that you shouldn’t judge a decision by its outcomes. If you’ve made a decision and then discover, six months later, that 50% of your team have quit and the rest are depressed and demotivated — well, tough luck. No amount of reasoning about ‘decision-making processes’ will save you from the fact that you’ve made a bad choice.

The right move in this latter situation is to ask yourself: “What might I do differently the next time a similar thing happens? What could I vary in order to achieve an outcome I desire?”

This approach is fundamentally different from that of the investors and intelligence analysts. It assumes that experience is valid and the world is learnable. It assumes that human misjudgment isn’t so terrible, and that we get things right most of the time.

In other words, it assumes that the world works just the way we intuitively assume it should — and that we can learn to make better decisions the same way a child learns to walk.

Regular and Irregular Domains

The reason finance writers publish such interesting books is that their methods are far removed from our intuitive model of the world. In other words, their methods are weird because their domains are weird. In a 2009 paper summarising their respective decision-making sub-fields, psychologists Daniel Kahneman and Gary Klein spell out the conditions required for expertise to exist. They conclude that for expert intuition to work, the practitioner needs to inhabit a domain where:

  1. The environment is regular. That is, the situation must be sufficiently predictable, with observable causal cues.
  2. There must be ample opportunities to learn causal cues from the environment.

Political analysts and poker players live in a domain that fails requirement 1. Political events are not sufficiently predictable, and poker is a game with probabilistic properties that far outstrip the capabilities of casual human observation. You need a firm grasp of probability theory to play poker competently — or an ability to come up with probability theory from observation alone.

Stock-picking fails requirements 1 and 2. In the short term the market is not predictable. In the long term, the market is usually efficient, and there is little opportunity to learn from the instances where it isn’t. Good investors must therefore use weird methods to build expertise. They cannot judge the quality of their decisions by the outcome of any one decision. And they cannot judge their decisions over a short time frame.

Writing software, managing people, and dealing with clients are domains that pass both requirements. It’s possible to rely on expert intuition in these fields. Which in turn means that you should probably view explicit analysis in such fields with some suspicion.

In reality, of course, we live in a world that is a mix of both regular and irregular domains.

Delegating tasks to your subordinates is a regular decision. Picking a city to live in isn’t.

Reading people accurately is a regular decision. Choosing careers isn’t.

When learning techniques for judgment or decision making, it's probably worth asking yourself: is the author of this technique living in a regular or an irregular domain? Methods developed for irregular domains are rarely as effective in regular ones, and vice versa.

Which leads us to my original implicit question, referenced at the start of this post: what does this have to do with Commonplace’s recent focus on viewing time allocation as capital allocation?

I think it's simple: time allocation is usually a process of analysis, because it usually deals with uncertain futures in irregular environments.

Applying the Kelly criterion to your time assumes that you are analysing your career moves as you would an investment — that is, a bet with an uncertain rate of return. Jacob Steinhardt’s information rate method was developed for highly speculative research projects. Both are applicable to the sorts of life decisions that one cannot build expertise for — that is, those decisions that fail on either the first or the second of Kahneman and Klein’s two requirements.
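For readers who haven't met it, the Kelly criterion itself is a one-line formula: given win probability p and net odds b, it prescribes staking the fraction f* = (bp - q)/b of your bankroll, where q = 1 - p. A minimal sketch follows; the 60% / 2-to-1 numbers are hypothetical, chosen only for illustration:

```python
def kelly_fraction(p: float, b: float) -> float:
    """Optimal bet fraction under the Kelly criterion.

    p: probability the bet pays off
    b: net odds (profit per unit staked if you win)
    """
    q = 1.0 - p
    return (b * p - q) / b

# Hypothetical example: a project with a 60% chance of paying off
# at 2-to-1 odds gets (2*0.6 - 0.4) / 2 = 40% of your resources.
fraction = kelly_fraction(p=0.6, b=2.0)
print(round(fraction, 2))  # 0.4
```

A negative f* means the Kelly answer is "don't take the bet at all" — which is exactly the kind of explicit analysis that only makes sense in irregular domains, where intuition has nothing to train on.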

Alert readers will notice that the ideas in this post are a recap of ideas I've already covered in my framework for putting mental models to practice. But the second-order implications are still worth exploring.

Whenever you see a self-help method touted as a technique for better decision-making, pause to ask yourself: is this applicable in a regular or irregular domain? Is the author a practitioner in a regular or irregular domain?

And then: apply those ideas accordingly.