One of the more intriguing qualities of the most effective people I know is that they tend to have an ability to think ‘at the right level of abstraction’. Not too high; not too low. Just right for the problem they’re solving.
Historically, I haven’t been very good at this.
When I first moved to Vietnam, I wrote a personal essay about betting on Asia. The piece was logically coherent and tightly argued, but it was also laughably wrong. I argued that Asia was the future. I argued that a young person of Asian descent would do better to learn how to build companies in Asia; that the problems of Asia were different from the problems of Silicon Valley. I argued that if you wanted to build a tech company under such constraints, an Asian startup should be focused on Asian problems: the larger, the more valuable, and the more overlooked by Silicon Valley, the better. I used manufacturing as a thought experiment.
But I was wrong.
It wasn’t that any of these points were necessarily mistaken — arguably, the most successful Asian startups were started by Asian people and built to solve Asian problems. (This is the problem with tightly argued generalities: they often contain truisms.) The issue with my argument was that it was simply irrelevant — it didn’t matter if all of these things were true. It didn’t matter that the centre of global power was shifting East; it didn’t matter that venture capital didn’t give a shit about Asia in 2014 but changed its mind quickly; it didn’t matter that Asian unicorns would emerge over the course of the 2010s. I was too small to take advantage of any of these shifts. I was a young person moving into a job. I was thinking at the wrong level of abstraction.
Much later, I discovered an old chestnut from investors Warren Buffett and Charlie Munger, which they named the Noah Rule (after the biblical story): ‘predicting rain doesn’t count, building an ark does’. To hijack that aphorism a little, I was essentially gesticulating at all the storm clouds on the horizon, reporting on the shapes of the cumulonimbuses, telling everyone who would listen about the coming thunderstorm. But I had no carpentry skills to speak of; I couldn’t take advantage of any of these analyses. It was a waste of time.
Reasoning at the Wrong Level
I occasionally talk to juniors from my alma mater. They tend to be interested in a conversation after learning that I write a blog about careers; on my end, I’m more than happy to talk to them so I can hear about the changes that have occurred in the School of Computing, which I loved.
One common pattern that emerges from those conversations is a version of the same reasoning that led me to write my Because Asia essay. The arguments tend to be made by incredibly smart, analytical people: my juniors would explain to me that venture capital in Asia is under-developed, or that Singapore is a terrible place to work in because of bad incentives for remote FAANG offices, or perhaps that Singapore will be caught between Great Power conflict, or perhaps that its advantages in the first 50 years of its life no longer hold true for the next 50. They then conclude that perhaps it would be better to move out of Singapore, and seek their fortunes elsewhere.
The arguments tend to be well-articulated, intelligent analyses of macro-level trends. But I now suspect they are wasteful attempts at thinking. Like my essay about Asian startups, the arguments operate at the wrong level of abstraction — wrong, that is, for the decisions these people are to make.
It doesn’t matter if Singapore is geopolitically challenged, or that it faces new challenges in the 2020s, or that VC money is skipping SG entirely and flowing into Indonesia. What matters for these students are questions at a lower level of abstraction: “What are my career goals, and which companies may I work at to advance towards those goals?” along with “What are the market/regulatory/economic shifts that are affecting those companies directly?” These questions operate at the right level of abstraction. Commentary about macro-economic trends does not.
Why? I think the answer is obvious when you think about it for a bit: macro-economic shifts trickle a long way down, through a complex system, before they become local trends. These shifts may or may not result in the outcomes you expect. So if you operate at a much lower level in the system, and you want to make a decision about your career, you should probably pay attention to the local trends first, without thinking about the macro-economic shifts that may or may not have led to them.
Another way of thinking about this is that proximate causes (the cause immediately preceding an event) are easier to reason about than remote causes (causes that are further away from the event in question, but that contribute through the chain of causality). Remote causes are often complex and intertwined with a huge number of factors (think: the Fed raising the interest rate, and affecting the whole economy in one fell swoop); proximate causes are simpler to think about. And so it is usually more effective to focus on proximate causes, because any change in the macro-environment that will affect you will likely show up as a proximate cause first. The way I like to think about this is that remote causes are akin to the interactions of water molecules in a vapour cloud — you’ll have to spend a huge amount of analytical power to figure out if they’ll ultimately affect your windows. The effective person won’t bother with any of that; they would instead watch for the movement of water droplets on glass.
I mentioned previously that I ran a software engineering office in Vietnam, and that none of the trends that I wrote about in my Asia essay ultimately mattered to my stint there. What ended up being important were the following shifts:
- Regulatory changes by the Singapore government, that affected our customers.
- What our customers did (these were mostly retail companies; so we paid attention to things that affected them — including what their landlords were doing.)
- What our competitors did.
- The shape of the labour market in Vietnam, where I had to hire from.
It’s mistaken to think that analysis of these topics is always less time-consuming than macro-level analysis — in reality, some investigations demanded about the same amount of time and energy that you might have expected from a study of geopolitical trends.
But there were two important differences:
- These topics sat at the ‘right’ level of abstraction — they were nearer to the business, and therefore nearer to the decisions that we had to make. This made them more useful.
- I couldn’t use any of these insights to sound smart in normal conversation. The intelligence was hard-won, but too specific to our industry to be interesting to most of my friends.
An Appreciation for Complex Adaptive Systems
There’s another explanation for the effect that I think is worth talking about.
Many of the systems we operate in are complex adaptive systems. They are:
- Complex, for they consist of many interweaving parts.
- Adaptive, for they consist of many agents that compete, cooperate, and adapt to each other.
- A system, for these interconnected elements may be said to be coherently organised.
Complex adaptive systems lie at the heart of a field of study we now call ‘complexity science’. The most approachable popular introduction to the field of complexity science is M. Mitchell Waldrop’s Complexity, which examines the founding of The Santa Fe Institute — a multi-disciplinary research organisation set up to study such systems.
The book explains complex adaptive systems as follows (forgive this long excerpt; Waldrop has a tendency to drag explanations out to make them more palatable):
(…) the economy is an example par excellence of what the Santa Fe Institute had come to call “complex adaptive systems.” In the natural world such systems included brains, immune systems, ecologies, cells, developing embryos, and ant colonies. In the human world they included cultural and social systems such as political parties or scientific communities. Once you learned how to recognize them, in fact, these systems were everywhere. But wherever you found them, said Holland, they all seemed to share certain crucial properties.
First, he said, each of these systems is a network of many “agents” acting in parallel. In a brain the agents are nerve cells, in an ecology the agents are species, in a cell the agents are organelles such as the nucleus and the mitochondria, in an embryo the agents are cells, and so on. In an economy, the agents might be individuals or households. Or if you were looking at business cycles, the agents might be firms. And if you were looking at international trade, the agents might even be whole nations. But regardless of how you define them, each agent finds itself in an environment produced by its interactions with the other agents in the system. It is constantly acting and reacting to what the other agents are doing. And because of that, essentially nothing in its environment is fixed.
Furthermore, said Holland, the control of a complex adaptive system tends to be highly dispersed. There is no master neuron in the brain, for example, nor is there any master cell within a developing embryo. If there is to be any coherent behavior in the system, it has to arise from competition and cooperation among the agents themselves. This is true even in an economy. Ask any president trying to cope with a stubborn recession: no matter what Washington does to fiddle with interest rates and tax policy and the money supply, the overall behavior of the economy is still the result of myriad economic decisions made every day by millions of individual people.
Second, said Holland, a complex adaptive system has many levels of organization, with agents at any one level serving as the building blocks for agents at a higher level. A group of proteins, lipids, and nucleic acids will form a cell, a group of cells will form a tissue, a collection of tissues will form an organ, an association of organs will form a whole organism, and a group of organisms will form an ecosystem. In the brain, one group of neurons will form the speech centers, another the motor cortex, and still another the visual cortex. And in precisely the same way, a group of individual workers will compose a department, a group of departments will compose a division, and so on through companies, economic sectors, national economies, and finally the world economy.
Furthermore, said Holland—and this was something he considered very important—complex adaptive systems are constantly revising and rearranging their building blocks as they gain experience. Succeeding generations of organisms will modify and rearrange their tissues through the process of evolution. The brain will continually strengthen or weaken myriad connections between its neurons as an individual learns from his or her encounters with the world. A firm will promote individuals who do well and (more rarely) will reshuffle its organizational chart for greater efficiency. Countries will make new trading agreements or realign themselves into whole new alliances.
At some deep, fundamental level, said Holland, all these processes of learning, evolution, and adaptation are the same. And one of the fundamental mechanisms of adaptation in any given system is this revision and recombination of the building blocks.
Third, he said, all complex adaptive systems anticipate the future. Obviously, this is no surprise to the economists. The anticipation of an extended recession, for example, may lead individuals to defer buying a new car or taking an expensive vacation—thereby helping guarantee that the recession will be extended. The anticipation of an oil shortage can likewise send shock waves of buying and selling through the oil markets—whether or not the shortage ever comes to pass.
But in fact, said Holland, this business of anticipation and prediction goes far beyond issues of human foresight, or even consciousness. From bacteria on up, every living creature has an implicit prediction encoded in its genes: “In such and such an environment, the organism specified by this genetic blueprint is likely to do well.” Likewise, every creature with a brain has myriad implicit predictions encoded in what it has learned: “In situation ABC, action XYZ is likely to pay off.”
More generally, said Holland, every complex adaptive system is constantly making predictions based on its various internal models of the world—its implicit or explicit assumptions about the way things are out there. Furthermore, these models are much more than passive blueprints. They are active. Like subroutines in a computer program, they can come to life in a given situation and “execute,” producing behavior in the system. In fact, you can think of internal models as the building blocks of behavior. And like any other building blocks, they can be tested, refined, and rearranged as the system gains experience.
Finally, said Holland, complex adaptive systems typically have many niches, each one of which can be exploited by an agent adapted to fill that niche. Thus, the economic world has a place for computer programmers, plumbers, steel mills, and pet stores, just as the rain forest has a place for tree sloths and butterflies. Moreover, the very act of filling one niche opens up more niches—for new parasites, for new predators and prey, for new symbiotic partners. So new opportunities are always being created by the system. And that, in turn, means that it’s essentially meaningless to talk about a complex adaptive system being in equilibrium: the system can never get there. It is always unfolding, always in transition. In fact, if the system ever does reach equilibrium, it isn’t just stable. It’s dead.
To compress that a little, complex adaptive systems have the following properties, as laid out by John Holland:
- they are made up of agents that act in parallel.
- the properties of such systems emerge out of competition and collaboration amongst these agents.
- they tend to have many levels of organisation, with different properties and behaviours emerging at each level.
- all complex adaptive systems anticipate the future and act according to their predictions, and
- complex adaptive systems are dynamic, and generate new niches and interconnections between those niches over time.
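These properties are easier to feel than to describe, so here is a toy sketch, loosely modelled on Brian Arthur’s ‘El Farol bar’ problem — another Santa Fe Institute classic. A hundred agents each hold a handful of internal models that predict next week’s bar attendance from recent history; an agent goes only if its best-scoring model predicts an uncrowded night, and models are continually re-scored against what actually happens. The predictor design, parameters, and names below are my own simplifications for illustration, not Arthur’s originals:

```python
import random

# Toy El Farol: agents act in parallel, each anticipating the future
# with simple internal models, and revising those models (Holland's
# 'building blocks') as the system generates its own environment.

N_AGENTS, THRESHOLD, WEEKS = 100, 60, 200

def make_predictor():
    """A predictor guesses next attendance as a scaled echo of recent history."""
    lag = random.randint(1, 3)
    scale = random.uniform(0.6, 1.4)
    return lambda history: scale * history[-lag]

class Agent:
    def __init__(self):
        self.predictors = [make_predictor() for _ in range(4)]
        self.scores = [0.0] * 4  # running accuracy credit per predictor

    def decide(self, history):
        best = self.scores.index(max(self.scores))
        self.last_predictions = [p(history) for p in self.predictors]
        return self.last_predictions[best] < THRESHOLD  # go if it looks uncrowded

    def learn(self, actual):
        for i, guess in enumerate(self.last_predictions):
            self.scores[i] -= abs(guess - actual)  # penalise bad guesses

def simulate(seed=0):
    random.seed(seed)
    agents = [Agent() for _ in range(N_AGENTS)]
    history = [random.randint(0, N_AGENTS) for _ in range(3)]
    for _ in range(WEEKS):
        attendance = sum(a.decide(history) for a in agents)
        for agent in agents:
            agent.learn(attendance)
        history.append(attendance)
    return history[3:]

attendance = simulate()
late = attendance[-50:]
print(sum(late) / len(late))  # mean attendance over the last 50 weeks
```

The point of the exercise: no agent controls attendance, yet in Arthur’s original model the crowd self-organises near the comfort threshold (this simplified toy may behave less cleanly). The aggregate behaviour emerges from competing predictions, and no amount of reasoning about one agent’s model tells you what the whole system will do.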
The implications that fall out of these properties are endlessly fascinating. Here are three.
First: if you are an agent operating within the system, the amount of analysis needed to grok all the moving parts of the system is monumental — so monumental, in fact, that unless you are responsible for the entire system, you should probably second-guess any attempt to do so (ask yourself: is it really necessary to understand the whole system, given my goals? Most operators have no need to do so — beyond the rules of the levels immediately above and below their own).
Second: thanks to emergence, the rules that apply to one level of the system don’t apply to different levels: so, the rules that govern economic stimulus are different from the rules that govern business competition; the rules for effective learning emerge from but have little mapping to the rules of neuroplasticity.
Therefore, third: you probably shouldn’t try to draw conclusions at one level and expect them to apply at the next.
These implications explain why, for instance, great businesspeople don’t often make good economic policymakers. Good businesspeople may be able to predict, out-think, and outcompete other businesses in their niches, but their mental models don’t map perfectly to the type of thinking needed to shape an economy.
Similarly, this is why the literature around habit-formation should probably be tossed out when talking about organisational culture or societal behaviour. Just because a set of ideas works for one level of a system doesn’t automatically make it relevant to the level of organisation above it. To make this more concrete: policymakers who think that the habit loop is sufficient for initiating behaviour change at the level of a group would likely be burned by Goodhart’s Law, or be taken by surprise by all the second and third order effects that ripple throughout the niches in a complex adaptive system. They need a different set of ideas to effect behavioural change at higher levels of organisation — ideas like incentive system design, not individual habit formation.
Finally, this insight lies at the heart of what I argued in Neuroplasticity is a Pretty Useless Idea for Practice:
I think this idea is broadly generalisable. For this post, however, I want to talk about a very specific instance of the principle: when attempting to get good at some skill, it is not as useful to think about or even talk about neuroplasticity. (…) Neuroplasticity simply tells us that the brain changes in response to learning. It doesn’t tell us how to learn better, or what pedagogical techniques work best for effective learning. And the research around neuroplasticity — that the brain can adapt to trauma, that physical therapy after a stroke works because it depends on the brain’s ability to reconfigure neural connections — isn’t as useful for practice as you might think; it doesn’t tell you anything instrumentally actionable that you may incorporate into your practice.
In other words, neuroplasticity operates at the wrong level of abstraction for learning. It describes what happens to your neural connections when you’re engaged in practice, but neuroscience isn’t the right level to mine for pedagogical insight. For that, you’ll have to go one level up — to the realm of cognitive science, or psychology.
The period in which I noticed this coincided with the period in which I realised my Because Asia essay was horribly, terribly wrong.
How might we use this idea?
The first implication we have already covered in this essay: be suspicious of ideas that are taken from one level of a complex adaptive system and are then applied to a higher (or lower) level. It is likely that they aren’t as useful as you might imagine them to be.
The second idea is a heuristic that I attribute to my friend, Jin: “when reasoning about your career, think one level above, or one level below.” Or, as he puts it more succinctly: “think ‘plus or minus one!’”
The intuition behind this is familiar — when you’re thinking about your career, you want to pay more attention to proximal causes, instead of remote ones. This usually translates to understanding the rules of the level directly ‘above’ and ‘below’ you. So, for instance:
- If you are a low-level designer or software engineer, you might want to pay attention to the shifts at the organisational level above you, but you probably shouldn’t be paying too much attention to the macro-environment shifts that might affect the company at large. It sounds smart to say “oh, the government’s coming after us for anti-trust reasons, so this is bad for my career and I should bail” or “oh, the business press believes that for structural reasons, my company is doomed, I should bail”; but the reality is more complex — certain departments in your company might win in such a shakeout, while others might lose; it’s really difficult to see what results in a net win for your career if you read the situation based on remote causes alone. (Here’s a thought experiment: imagine if Jony Ive — who was rotting away in Apple’s industrial design group before Jobs’s return — had quit, based purely on the industry consensus about the fate of Apple (in 1996, the consensus was ‘Apple is doomed!’). Think about what a waste that would’ve been! Instead, Ive listened to his boss, Jon Rubinstein, who argued that Apple ‘was going to make history’ after Jobs returned; Rubinstein knew what Jobs was going to do with the industrial design division.)
- If you were a CEO, or an exec, then it is totally worth it to pay attention to regulatory shifts — since this is one level above you. But pretty much everyone else in the org would do well to down-weight news from the macro-environment, because there are better (lower) parts of the system that they should pay attention to.
Again, the core idea is to focus your attention on things that are closer to your level, depending on where you are in the system — if you’re a lowly employee, it would do to cultivate information sources embedded one level above or below you; if you were a student looking for a job, it would do to investigate the factors that affect the companies you are interested in applying to … but perhaps not much more beyond that. Don’t do things like extrapolate from a government jobs report, or an analyst’s opinion of some company; look for signals and ideas at the right level of abstraction in your career and in your life.
The third way we can use this idea is to recognise it for what it is: a thinking tool. Are there remote causes that affect everyone, regardless of where you lie in the system? Of course there are: war, for instance, or a global pandemic. The key isn’t to say “seek ideas at the right level of abstraction … and discard everything that’s potentially remote or low level”; the key is to recognise that “only looking at remote levels to calculate optimal action” is itself a failure mode, and you don’t want to go to either extreme.
The reason I’ve spent an entire essay arguing that we should ‘seek ideas at the right level of abstraction’ is because I think that the opposite habit — ‘use high-level analyses as a justification for our actions’ — is a particularly pernicious trap for smart, analytical people. We do this because it’s a narrative stereotype: we think that geniuses must extrapolate from high-level analyses to individual action, and therefore we should do the same.
Take, for instance, the oft-told story that Jeff Bezos found a ‘staggering statistic’ while at D. E. Shaw:
“I found this fact on a website that the web was growing at 2,300 percent per year,” Bezos told CNBC in a 2001 interview. “The idea that sort of entranced me was this idea of building a bookstore online.”
Or this clip from the TV series Silicon Valley, where investor Peter Gregory extrapolates from the breeding cycles of cicadas to the price movements of sesame seed futures.
The pattern is the same: an analytical genius synthesises a number of high level trends and divines some future state of the world, and then decides on a course of action to take advantage of that future state. Never mind the narrative fallacy, Bezos is today one of the richest men in the world; Peter Gregory is portrayed as an eccentric genius billionaire on Silicon Valley (and based on another eccentric genius billionaire in real life). We read stories like these and internalise the idea that to appear smart, we must do something similar.
This is likely why I was drawn to analysing my career in terms of a global shift to Asia, and why so many of my juniors are drawn to creating career plans from some sophisticated analysis of the world. It explains why people think they can take ideas from lower levels of abstraction and apply them upwards. It’s why smart entrepreneurs in small ecommerce companies can say things like “I think Singapore’s advantages in global shipping will lead to advantages for ecommerce firms here” and proceed to get everything wrong.
We think we’re being smart. We think we can extrapolate up and down the causal chain, across multiple levels in a complex adaptive system. And perhaps some of us are truly genius enough to do so. But most of the time, I suspect that we’re just acting a part: we think we’re demonstrating intelligence … but we’re simply playing with ideas at the wrong level of abstraction.