A few days ago, Abu Amin asked me (via Twitter) how I do my personal experiments.
I don’t think I’m world class at personal experimentation, but I thought it might be useful to lay out how I approach my bets. This also seems relevant given that much of what I write about here is built on the assumption of using practice as a bar for truth.
As with most such things, I think the basics are simple and a little obvious. It’s the execution where it gets a little difficult.
Good Personal Experimentation Requires Good Self-Evaluation
The core principle in my approach is to pick a good evaluation metric, and to be honest with yourself. Both things are easier to say than to do.
Pick a good evaluation metric — Personal experiments are worth doing if they truly help you get better. The best way to tell if you’ve gotten better is to pick an evaluation metric before you begin, so you can limit the amount of post-hoc rationalisation (“but I feel better!”) that you may do.
A good evaluation metric has two properties:
- It makes it easy to evaluate improvement.
- It makes it difficult to lie to yourself (we’ll talk more about this in a bit).
For example, if your goal is to ‘improve critical thinking’ and your personal experiment is ‘read Sapiens and try to be really critical about it’, you can’t know if you’ve achieved your goal by the time you’re done with the book.
A slightly better evaluation would be ‘develop and then write three arguments against the ideas in Sapiens’. This does two things: a) it produces a concrete piece of output that may be evaluated by external parties (and, more importantly, by your future self), b) it forces you to do tangible work.
This second evaluation also happens to fulfil our two requirements for a good evaluation metric:
- It makes it easier to evaluate improvement — you’ve produced a written artefact that you may refer to in the future.
- It makes it more difficult to lie to yourself — written thoughts are more easily scrutinised than unspoken ones.
One common gotcha that I’ve seen is to think that a ‘good’ evaluation metric is a quantitative evaluation metric. While it is true that improvement on a quantitative metric is easier to see than improvement on a qualitative one, it’s too much of a limitation to stick to quantitative measures alone. (As the old saying goes, “not everything that counts can be counted, and not everything that can be counted counts.”)
For instance, with the Sapiens example, you may come back to your written criticism of Sapiens after a year or two, only to find that your old arguments were unsophisticated. The disgust that you feel is a sign that you have improved in your critical thinking. While this isn’t a particularly quantifiable metric, it does give you an indication of improvement. And that’s all we need.
Be honest with yourself — Being honest with yourself is a psychological frame, not necessarily a technique. One of my weaknesses is that I’m easily smitten with a new tool or idea — to the point where I overhype it to my friends. “Of course it’s effective,” I hear myself telling people, “I think this will be a game changer!”
What’s worked for me is to have very skeptical friends. I overhype some new technique or tool or idea to them, and watch as their eyes roll into the backs of their heads. After many iterations of this, I can now simulate their voices in my mind (usually as I’m talking excitedly about the idea): “Yes, Cedric, but have you considered …” — these voices keep me honest.
As with most myside-bias type things, it’s easier to see this in other people than it is to see it in yourself. As I write this, there are many people on Twitter who have bought into the benefits of good note-taking — some are adherents of Roam Research, others swear by Evernote, or Obsidian, or Notion. They speak of their note-taking tools as if they were the answer to all of their life problems, because the experience of using such tools often feels better than whatever they were doing before. My take is that it’s worth watching their output to see if the adoption has made concrete improvements to their work, instead of relying on the positive feelings that come in the first few months of any new practice. I am hopeful that some of these note-taking claims are true — but it’ll take a bit of time to separate valid improvements from initial hype.
Of course, core principles like ‘pick a good evaluation target’ and ‘don’t lie to yourself’ are only useful up to a certain point. There are other problems that you’ll inevitably stumble into when you’re executing.
When Debugging Techniques, Go From Most Useful Hypothesis to Least Useful
Sometimes the self improvement techniques that you try don’t work. For instance, let’s say that you’ve read Deep Work, and you’re trying to apply Cal Newport’s method of ‘taking breaks from focus’. You find this particularly difficult to implement. You’ve failed a lot over the past few weeks. What do you do?
Most people would come up with a number of possible hypotheses for their failure. They would guess:
- Perhaps there’s something in Newport’s environment that makes it easier for him to execute this technique.
- Perhaps Newport has better genetics that allow him to concentrate better.
- Perhaps Newport only did this for a period of time (perhaps while he was pursuing tenure, or testing ideas for this book?); and perhaps it is something that he’s since given up.
- Perhaps Newport has had a long history of practicing easier forms of ‘taking breaks from focus’, and it would pay to uncover what those forms are.
This is actually a fairly common experience. Most self-improvement techniques must be adapted to your personal circumstances — your preferences, career goals, workplace and personality — in order to work. Therefore, you would need to tease out the variations that might work for you during personal experimentation.
To do this, it is very important to pick the most useful hypothesis first and test that, instead of picking what might be most ‘true’. For instance, saying that ‘Newport has better genetics’ is a perfectly valid and possibly true interpretation of things, but it is also less useful: it doesn’t suggest new variations on the technique to try. Therefore, leave it to the very end before you conclude that Newport’s technique works because he is innately better at concentrating than you are.
I should note that this technique is a particular instance of ‘optimising for usefulness’; there are many other applications of this principle that you may discover for yourself.
Work Out the Implications of the Principles Behind the Technique
When implementing a technique, it’s often useful to take a step back and ask about the principles behind said technique.
I’ve written before that every actionable book is really two books inside: a book of techniques, and a book of principles.
When you are implementing techniques, you should spend some time working out the worldview and the principles that led the author to develop the aforementioned technique. Doing so makes your practice doubly effective: first, your implementation of the technique gives you some insight into the author’s thought processes, and second, your new-found appreciation of those thought processes gives you better insight into the entire collection of techniques that the author is trying to teach.
Here’s an example: in High Output Management, Andy Grove talks about a principle of production that he describes as ‘you should fix any problem at the lowest-value stage possible’.
To articulate this, Grove establishes that all forms of work take in raw inputs on one end, and produce higher-value work out the other. For instance, Intel factories take silicon, labour, and fabrication tech on one end, and produce chip wafers out the other; managers take information on one end, and produce decisions out the other. Grove writes:
A common rule we should always try to heed is to detect and fix any problem in a production process at the lowest-value stage possible. Thus, we should find and reject the rotten egg as it’s being delivered from our supplier rather than permitting the customer to find it. Likewise, if we can decide that we don’t want a college candidate at the time of the campus interview rather than during the course of a plant visit, we save the cost of the trip and the time of both the candidate and the interviewers. And we should also try to find any performance problem at the time of the unit test of the pieces that make up a compiler rather than in the course of the test of the final product itself.
As you put the techniques in High Output Management to practice, however, you will quickly realise that this principle undergirds many of the techniques that Grove teaches in his book. For instance:
- Why hold 1-on-1 meetings with your subordinates? Because doing so will uncover potential problems in your team, which allows you to fix them before they blow up.
- Why should you have managers from different parts of the company perform random site visits to other parts of the company? Because doing so brings in a fresh pair of eyes to production processes, and might help uncover issues that are best resolved at the lowest possible level.
- Why curate your information sources? Because doing so will allow you to identify more problems from more sources earlier.
- Why identify pointless meetings and call them off early? Because the earlier you call off such meetings, the lower the ‘value-added stage’, and the more managerial resources you may conserve.
- And on it goes.
Where this becomes particularly interesting is when you have successfully internalised Grove’s principles. For instance, one implication of deeply internalising ‘fix problems at the lowest-value stage possible’ is that you may now evaluate the management competence of any company by investigating its blowups — that is, work emergencies that lead to a scramble of employees and resources.
Many — though not all — blowups are a sign that the organisation did not catch a potential problem at a lower value-added stage. With Grove’s principle at hand, you now have a lens with which to evaluate existing managers (under whom the blowups occurred), and to develop new practices to reduce the occurrence of such emergencies.
Note that Grove doesn’t teach this lens explicitly — he merely articulates the principle, and then goes into a large number of techniques that are expressions of that principle in action. You have to understand how he applies the principle through experimentation, in order to wield the lens for yourself.
So what have we covered? We’ve covered a small handful of ideas that I’ve found useful when doing personal experiments:
- Pick an evaluation metric before you begin.
- Don’t lie to yourself.
- When debugging techniques, go from most useful hypothesis to least.
- Work out the implications of the principles behind each technique.
Naturally, some experiments might span multiple weeks, as you debug the ideas and figure out how to adapt them to your life. It goes without saying that you should track your personal experiments in a note-taking app of some kind. You probably already have a notes app on your phone; record your experiments however you wish.
You’ll also notice that this is all incredibly time consuming. We shouldn’t be terribly surprised by this: techniques take some time to apply, and internalising principles takes longer still. I’ve mentioned before that I am limited to testing two actionable ideas a week — given the content of this blog post, you now know why that is.