
What I Learnt From Complexity




    I wrote a tweet a few days ago that went:

    Complexity is a truly weird book. The story the book tells is compelling — or as compelling as a story about any scientific movement can be — and the author, one M. Mitchell Waldrop, is known for historically important books. Even the concept of a complex adaptive system is easy to grok. But the second and third order implications of the idea are pernicious as hell; it has affected the way I see everything around me.

    This is perhaps made more remarkable by the fact that Complexity is a non-fiction narrative book. I expect non-fiction idea books to change my worldview — after all, you expect a book to make its argument clearly so that you may grapple with it, and through the grappling you may find yourself agreeing or disagreeing with the arguments and changing your mind. It’s rather rarer for a story to get inside my head and violently rearrange my mental furniture for me.

    I want to talk a little about these ideas, so that I may perhaps convince you to pick up the book and give it a go. But the thrust of this piece is simply that the concept of a complex adaptive system is a hell of an idea virus, and I’m still trying to work out its implications for my worldview; the Waldrop book just happens to be a delightfully written, 477-page infectious vector. I think it’s very good.

    The Story of Complexity

    Complexity tells the origin story of the Santa Fe Institute. The SFI is an independent, nonprofit theoretical research institute dedicated to the multidisciplinary study of complex adaptive systems. That’s a lot of fancy words in one sentence, so I’ll tell you a simpler story to explain the arc of Complexity’s narrative instead.

    A complex adaptive system is a system in which a large network of components with no central control exhibits complex behaviour, sophisticated information processing, and adaptive learning. (I’ve stolen this definition from Melanie Mitchell’s Complexity: A Guided Tour, which is, uh, less easy to read than Waldrop’s Complexity.) The easiest way to think about what this means is to imagine … well, traffic:

    • Traffic is an emergent, complex behaviour that shows up with no centralised control (duh).
    • It displays sophisticated information processing — if you expose even a little information to drivers (perhaps by way of Google Maps) you change the collective behaviour.
    • It has adaptive learning — if you build a new highway, or a new road, traffic quickly learns to adapt to it, though occasionally in ways you do not predict. (There’s a toy simulation of this sort of emergent behaviour just after this list.)
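
    A quick aside for those who like seeing this in code. The classic toy model of emergent traffic is the Nagel–Schreckenberg cellular automaton, and below is a minimal sketch of it in Python. To be clear: this isn’t from Waldrop’s book, and the road length, car count, and braking probability are arbitrary numbers I picked for illustration. Every car follows the same four local rules, with no central controller anywhere, and yet ‘phantom jams’ appear and dissolve on their own:

    ```python
    import random

    # Minimal Nagel-Schreckenberg traffic model -- a standard toy example of a
    # complex adaptive system. All parameters are arbitrary illustration values.
    ROAD_LENGTH = 100   # cells on a circular road
    N_CARS = 30
    V_MAX = 5           # top speed, in cells per tick
    P_BRAKE = 0.3       # chance a driver randomly slows down for a moment
    STEPS = 50

    positions = sorted(random.sample(range(ROAD_LENGTH), N_CARS))
    speeds = [0] * N_CARS

    for _ in range(STEPS):
        # Each driver applies the same four local rules; nobody sees the whole road.
        gaps = [(positions[(i + 1) % N_CARS] - positions[i] - 1) % ROAD_LENGTH
                for i in range(N_CARS)]
        for i in range(N_CARS):
            speeds[i] = min(speeds[i] + 1, V_MAX)   # 1. accelerate if possible
            speeds[i] = min(speeds[i], gaps[i])     # 2. don't hit the car ahead
            if speeds[i] > 0 and random.random() < P_BRAKE:
                speeds[i] -= 1                      # 3. occasional random braking
        positions = [(positions[i] + speeds[i]) % ROAD_LENGTH
                     for i in range(N_CARS)]        # 4. move

        # Draw the road: '.' is an empty cell, a digit is a car moving at that
        # speed. Clusters of 0s and 1s are jams -- no accident, no planner.
        road = ['.'] * ROAD_LENGTH
        for pos, v in zip(positions, speeds):
            road[pos] = str(v)
        print(''.join(road))
    ```

    Run it a few times and the jams never form in the same places twice. No single driver causes them, and no single driver can clear them; that is what ‘emergent, with no centralised control’ means in practice.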

    More importantly for our purposes, traffic is an excellent analogy for complex adaptive systems because most of us have had very visceral experiences with traffic: we’ve either been stuck in traffic, or have had very strong feelings about the asshole (there is always one) who caused the car accident that caused the jam that led us to miss our flight.

    We’ve also all had to make commuting decisions that take traffic into account. Consider the following obvious properties of decision making in the face of traffic:

    • It is impossible to predict commute times with a high level of accuracy. For instance, you would regard me with some scepticism if I said “on the third Friday of August next year, your commute to work will be exactly 24 minutes and 53 seconds.” Because how the hell would I know? Perhaps a man might decide to cross the street at exactly the wrong hour on the third Friday of August 2022, which results in a car swerving to avoid him in exactly the wrong way, leading to a five-vehicle pileup that causes a snarl of horrible traffic the likes of which we have never seen before? Or perhaps enough people decide to leave their offices early that they cause a larger-than-average slowdown in traffic? It is impossible to tell what might happen this far in advance; the point here is that small changes may lead to unpredictable end states in a complex adaptive system.
    • This doesn’t mean that we can’t make decisions about traffic, though. We may, for instance, observe the current state of the system, by using apps like Google Maps and Waze (or listening to traffic reports on the radio), in order to pick different routes or different commute times.
    • And it doesn’t mean that we cannot describe or plan around the system. For instance, while accurate commute time predictions far into the future might not be possible, we may make useful generalisations about traffic in the current system, like “be careful of the Dairy Farm exit on the PIE; that usually jams up on Friday evenings” — with the full knowledge that such generalisations are conditionally true; they may change if new roads are built or if collective behaviour changes (perhaps because remote work takes over or if the centre of industry moves elsewhere).

    We’ll return to the analogy of traffic in a bit, but it’s useful to keep the idea at the back of your mind. Complexity, of course, observes that much of the world is like traffic. But what is really important to notice here is that there are two possible approaches to thinking about traffic:

    1. You may attempt to derive an ‘analytical’ solution — that is, a set of equations that allow you to plug in numbers to predict and therefore make decisions about the system. The argument goes something like “Look, we can calculate the paths of planets precisely enough that we can launch space probes that slingshot past planets and moons. Surely we can do this for traffic?” And indeed, this is the approach that mainstream economics, say, has taken for much of its history — economists believe that while they cannot reason about the whole economy, they can slice off portions that are mathematically tractable enough to be useful.
    2. The second approach is to just give up and declare that no ‘analytical’ solution is possible, and argue that we should treat the system as something … different. Here the arguments for what we should do vary: some researchers think that we should develop new forms of mathematics to deal with the weird non-linear behaviour of a CAS; others argue that no analytical solution is possible and that we should just focus on simulations instead.

    The Santa Fe Institute was created to explore that second approach — and what an approach it is! Complexity tells the story of the approach as applied to physics, and biology, and economics, and artificial intelligence, and history, and on and on, but also the story of the early researchers who were desperately trying to get a funding model and an institutional structure for a new type of science off the ground.

    How has the Santa Fe Institute performed in the decades since the events of the book? The answer is decidedly more mixed than I would’ve wanted. It is tempting to look at the math coming out of the institute — and the apparent lack of influence it has had on the mainstream versions of its fields — and conclude that it is mostly a fringe approach, done at a fringe institute with fringe influence. This is not exactly true (the field won its first Nobel Prize recently), and scientific revolutions such as the one typified by SFI’s agenda take time to get going.

    But that’s beside the point, I think. If you are a business investor or an operator, you might not be so interested in the various contributions that Complexity Science has made across the varied fields it touches. Instead, you are more likely to be like me — interested in better models of reality, so that you may make better decisions. And what I’ve found — buried inside Waldrop’s Complexity — is that the idea of a complex adaptive system is in itself useful, and it is what has haunted me for the past year.

    Action without Prediction

    Here is one worldview implied by a complex adaptive system: you cannot predict what will happen in the future. History is like traffic: even tiny events might snowball into world wars. This is perhaps blandly obvious to you, as it might be to anyone who has read their fair share of history.

    But here’s the kicker to that worldview: “… and therefore you must learn to act without prediction.”

    In my experience, this is the bit that’s difficult to wrap your head around. I’ve certainly struggled with this idea — because, if you think about it, what does it even mean to act without predicting the future? Isn’t making a decision essentially a bet on some future state of the world? How can you possibly take action intelligently if you don’t know what might happen as a result of your actions?

    It doesn’t seem obvious that Complexity would lead you down this path, or towards this particular worldview, but if you trace the thinkers, businesspeople, and investors who are taken in by SFI’s ideas — and there have been many — you’d eventually come across this idea in all its various forms.

    Let’s make this more concrete, more visceral. Compare these two worldviews:

    “There is no compelling use case in crypto, so I do not believe I will invest in anything in the space. Until I can find a compelling narrative that is satisfactory to me (or some combination of that and social pressure and FOMO) … I will not invest.” This is a statement born of the ‘action is prediction’ worldview — which most of us reach for. We feel uncomfortable when we are presented with something completely uncertain, which tends to happen when something is novel. This is human nature. Our brains are sensemaking organs, and they work by generating explanatory narratives for the chaos of reality — in this case, by attempting to map the phenomenon to something we already know.

    But, consider this stance instead: “I have no idea what crypto will become, and I will not attempt to guess. I’ve read enough history to know that nobody can tell what use cases exist for a nascent technology — so why do I think I am different from those who have come before me? Groundbreaking innovations are the result of a complex adaptive system: they occur when actors recombine innovations across multiple small discoveries through a process of adaptive experimentation. Nobody can tell which recombinations would work in advance. No, my stance is to make small bets, clear my head of preconceptions, and just watch for the hottest areas of the space where experimentation and recombination seem to be happening the fastest.”
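
    That last stance has a close cousin in computing: the explore/exploit loop. Below is a small sketch in Python of an ε-greedy bandit making small bets across a handful of areas. To be clear, this is my illustration, not anything from the book; the area names and their hidden payoff rates are invented. Notice that the bettor never predicts which area will win. It just bets, observes, and shifts weight toward whatever the system reveals:

    ```python
    import random

    # An epsilon-greedy bandit: act without prediction by making small bets,
    # observing payoffs, and backing whatever the system reveals to be working.
    # The areas and hit rates below are invented purely for illustration.
    AREAS = ["payments", "identity", "storage", "gaming"]
    TRUE_HIT_RATE = {"payments": 0.05, "identity": 0.02,
                     "storage": 0.01, "gaming": 0.12}  # hidden from the bettor

    bets = {a: 0 for a in AREAS}
    wins = {a: 0 for a in AREAS}
    EPSILON = 0.2  # fraction of bets that stay exploratory, no matter what

    def observed_rate(area):
        return wins[area] / bets[area] if bets[area] else 0.0

    for _ in range(500):
        if random.random() < EPSILON:
            area = random.choice(AREAS)           # explore: a small bet anywhere
        else:
            area = max(AREAS, key=observed_rate)  # exploit: back the hottest area
        bets[area] += 1
        if random.random() < TRUE_HIT_RATE[area]:
            wins[area] += 1                       # we only learn by acting

    for a in AREAS:
        print(f"{a:>9}: {bets[a]:3d} bets, observed hit rate {observed_rate(a):.2f}")
    ```

    The point is not the algorithm. The point is that the loop holds no belief about which area must win; it simply keeps enough small bets alive to notice when one starts paying off.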

    If we map the two opinions to, uh, traffic, the former opinion is basically “I want to be able to predict traffic; but I cannot predict a precise commute time so I won’t commute”; the latter is “I cannot predict traffic, but I can take action to uncover the quirks of traffic flow in our current road system. So I may learn, for instance, that the Dairy Farm exit on the PIE tends to be shitty at around 6pm on Fridays. That’s good enough to take action on, when the time is right.”

    Here’s another example. Let’s say that you want to start a company. “I have a vision of the future and I will do everything possible to turn that vision into reality” is one narrative that is available to the startup founder. Another, similar one is “I think X is a huge trend, I will exploit this opportunity in the market by starting a company that does Y, which is related to X.”

    Both narratives are common in retellings of entrepreneurship. But it’s questionable whether they are truly reflective of the kinds of thinking that entrepreneurs use in execution. In fact, both narratives demand a narrow view of the future to come true in order to work. Contrast this with the following type of reasoning: “I’m going to go after X. There’s something there that’s interesting that I can’t place. I can’t predict how I’m going to win right now, but I think that if I start with what I have, make bets that won’t kill me, and adapt quickly to whatever I uncover during the course of execution, I will be able to shape the future as part of a complex adaptive system.”

    The former is ‘causal reasoning’, which is taught in MBA classes; the latter is ‘effectual reasoning’, which appears to be what entrepreneurial thinking is actually made of — that is, if you believe a 2001 cognitive task analysis by Professor Saras D. Sarasvathy. You may read the original paper that describes effectual thinking here; if you’d like a more entertaining version, marked up by Sun Microsystems co-founder Vinod Khosla, you may find the scanned copy here.

    Another example. Geopolitics is a complex adaptive system because it consists of hundreds of state actors and millions of individual ones. So contrast the two quotes below:

    (Geopolitical analyst Peter Zeihan): So the entirety of the Chinese success story, the entirety of the Panda Boom happened during the most internationally abnormal period in human history. And in that time, the Chinese grew to be the world's second largest economy, but also the most overcapitalised, over financed, overexposed, over leveraged economy in world history. Of course, this is going to end. The question on the backside is not, will China survive the fall? It won't. The question is what becomes of the Asian power balances after China's fall? (Source)

    Zeihan is charismatic. He expresses geopolitical narratives with the supreme confidence of a forecaster who can see the future. Nation states will do A, B, and C. Empires will fall. Power structures will change, in exactly the way he describes they will change. Zeihan, it appears, can predict everything, never you mind that the assassination of the wrong man at the wrong time once led to global war, that a monk setting himself on fire once led to Watergate, and that a pandemic was completely unexpected in the final months of 2019.

    Contrast that to retired Singaporean diplomat Bilahari Kausikan, who learnt his statecraft under Lee Kuan Yew:

    With both sides inclined towards prudence, I have little regard for mechanistic theories of US-China relations such as the so-called ‘Thucydides Trap’. It is true that historically, strategic adjustments of the magnitude that are underway between the US and China have either been the result of war or ended in war. But to treat someone as an enemy is to make an enemy and the theory of the ‘Thucydides Trap’ does not place sufficient emphasis on human agency: to recognise that there may be a Trap is to go a long way towards avoiding it. In any case, China will soon acquire a credible second strike capability if it does not already have one. The prospect of Mutually Assured Destruction has the effect of freezing the international order as it substantially did during the Cold War when, except in the Middle East, most geopolitical changes were due to internal rather than external developments. The primary military risk in US-China relations is conflict by accident, not war by design. (Source)

    Zeihan’s narrative is mechanistic and precise, with clear causes and effects. Kausikan, on the other hand, like Lee Kuan Yew before him, does no more than describe the tendencies of the state actors as he knows them, and the dynamics of the global political chessboard as he’s experienced it. To overuse our traffic example, Zeihan is giving you a precise commute time prediction; Kausikan is simply describing the nature of traffic in the road system.

    Of course, if you look at their backgrounds, both approaches are understandably shaped by their respective careers. Zeihan’s job is to sell books and consulting engagements. Kausikan’s job is to ‘take the world for what it is, not what (tiny, insignificant) Singapore wishes it to be’, in order to protect Singaporean interests. The former does not need to deal with the nature of geopolitics as a complex adaptive system; the latter does.

    One final example: I’ve found it very useful to view an organisation through the lens of a complex adaptive system. I’ve long puzzled over the right way to talk about such things — organisational cultures tend to be path-dependent, meaning that they are formed by the individual actions of those who have come before. And if this were the case — if organisational behaviour were truly emergent and adaptive — then the skill of shaping an organisation should rely less on perfect prediction and more on good observation. In The Skill of Org Design, I wrote:

    The key difficulty with this task is that organisations are complex adaptive systems — meaning that they consist of individual humans responding to a messy combination of social, cultural, and economic system incentives. Their individual responses to those incentives will themselves create new org dynamics that you have to deal with. As a result, you cannot predict how the humans in your organisation will react to your changes — not with perfect accuracy, at least. So the nature of org design demands that you iterate — that you introduce some set of changes, watch how those changes ripple out in organisational behaviour, and then either roll back the change, or tweak in response to those observations.

    This is the simulation view of the world, informed by the ideas presented in Complexity. In the Hacker News comments in response to that piece, however, people demanded that the skill take on the form of an analytical solution instead:

    Notice how there are no clear criteria for evaluation in this space? No math. No models. Just loose concepts strung together with words and sprinkled with calls to authority (e.g., Andy Grove) to add credibility. No evidence. No science. (…) Here's a simple question that should be answerable in any approach to org design. What's the optimal span of control for management at each level in the organisational hierarchy?

    What is interesting to me is that before reading Complexity, I thought very similar things — I thought that one needed to come up with some all-encompassing framework for the skill. And I couldn’t do that — I didn’t know all the possible organisational shapes that might work for every given context. Worse, I knew from experience how different team makeups and different nationalities — or even different, pre-existing org cultures — screwed with blunt attempts at org design.

    Reading Waldrop’s Complexity made me realise that whatever skill I had at org design was like any other skill in the face of a complex adaptive system: it was a skill of iteration in response to observation of a system. You don’t really need a set of universal equations to make good decisions in a CAS; you simply need to learn to watch the system, or others like it, and adapt to what you see. Just like traffic. Just like entrepreneurship.

    Complexity is a gateway drug to this form of thinking about the world.

    I highly recommend it.


    Endnote

    If you’d like to read more about making decisions in complex adaptive systems, I recommend NZS Capital’s Complexity Investing, which is one interpretation of the ‘action without prediction’ worldview, expanded to an entire investing framework.

    (To be more precise, I think NZS’s categorisation of narrow prediction vs broad prediction is more accurate — they believe that narrow prediction is dumb when faced with a CAS; broad predictions like ‘semiconductors are going to be important in the future’ are fine.)

    I should note that I’ve not internalised (or even covered) every idea in the CAS pantheon — for instance, notice how NZS's paper spends a fair bit of time talking about power law relationships and slack. These ideas, along with ‘criticality’, ‘ergodicity’, and ‘the boundary between order and chaos’ are clusters of ideas that belong to the study of complex systems, and I have spent only a tiny amount of time on them.

    Just grokking the second and third order implications of CASs took me about a year. It’ll probably take me a few more years to grok the rest.

    See also: Seek Ideas at the Right Level of Abstraction and The Limits of Applied Superforecasting.


