Business Expertise: The Importance of Cognitive Agility

    This is Part 2 of a series on business expertise. Read Part 1 here.

    In our previous post, we took a look at Lia DiBello’s work on the mental model of business expertise. We learnt that business expertise consists of two parts: domain-specific mental models that adhere to a triad (supply, demand, and capital), and a construct that DiBello names ‘cognitive agility’. Cognitive agility, as she puts it, is ‘the extent to which an individual revises his or her evaluation of a situation in response to data indicating that conditions have changed’.

    DiBello’s claim is that an exec’s cognitive agility matters more for overall business expertise than general problem-solving ability does. I wrote, in the previous piece:

    For several years now, I've tried to talk about the limits of frameworks, as experienced from the perspective of a business operator. I've noticed that when a market changes, or when you're operating in a local environment, no framework can fully capture everything that you're seeing as you brush against reality. I've attempted to capture this idea in blog posts such as Reality Without Frameworks, How First Principles Thinking Fails, and Good Synthesis is the Start of Good Sensemaking. More importantly, I've attempted to articulate DiBello's notion of cognitive rigidity through a trait I invented and named 'Dismissively Stubborn'. I wrote that piece to explain the type of people I’ve learnt to avoid in startup environments.

    But it turns out that DiBello had thought of this, and more, a full decade before I did.

    My biases are pretty clear: DiBello’s articulation of cognitive agility was deeply gratifying to me, as I’ve long suspected that the worst people to hire in a startup environment are those with low cognitive agility. (The question, of course, is how to detect low cognitive agility; the last person my old boss hired with this trait cost us more than a million dollars in profits. I still don’t have a good test for it, beyond ‘work with them for a while and see’.)

    In this essay we’ll take a closer look at the concept of cognitive agility. The idea turns out to be more interesting than first meets the eye, mostly because it is deeply tied to some of the learning theories that underpin DiBello and co’s accelerated trial-and-error training programs.

    In other words, cognitive agility tells us a lot about the way we learn — and fail to learn — in business and in life.

    The Argument for Cognitive Agility

    I went into the research for this piece thinking that DiBello had authored a full paper on cognitive agility. I quickly learnt that she had not. The only explicit argument that DiBello makes for the construct — at least, as far as I can tell — appears in the following two passages from her 2010 chapter in Informed by Knowledge:

    Given the volatility of markets and the complexity of team decision making in large organisations, we need not be as concerned with the general cognitive ability of today’s executive so much as his or her domain-specific expertise and cognitive agility, which is often associated with the kind of intuitive expertise developed through experience (emphasis added). Cognitive agility is what allows executives to rapidly respond to changing situations and revise guiding mental models to meet performance demands. In sum, these insights require us to shift away from thinking of the cognitive ability of business leaders as fixed and static, and instead to focus on the way in which expertise evolves over time in response to dynamic environments. It also requires us to discard attempts to locate decision-making expertise as a fixed capacity that resides within the individual decision maker. [1]
    Building on theories of emergence, adaptation, and flexibility from complexity science and NDM, we use the term cognitive agility to refer to the extent to which an individual revises his or her evaluation of a situation in response to data indicating that conditions have changed. The converse is cognitive rigidity, where the person is impervious to new data, being dominated by a rigid framework or paradigm that acts to filter out new, possibly relevant, information, creating blind spots (DiBello 1997). [1]

    That last passage cites a chapter titled Exploring the relationship between activity and the development of expertise; paradigm shifts and decision defaults, which is in turn about an intervention that DiBello did with a large remanufacturing facility in the late 90s, relatively early in her career. The facility was attempting to use a complex Material Requirements Planning (MRP) system — a system that usually doesn’t take when introduced in manufacturing organisations.

    (The polite way of describing this is ‘MRP was selected as a domain because it represents a class of technology that is widely known to require users who understand its underlying principles and it has a high failure rate because it is so difficult to learn.’ The not so polite way of describing this is that it was a ‘digital transformation project’ — a phrase that should fill most technologists with dread.)

    The problem, of course, is that MRP systems are designed to express a ‘theory of manufacturing’, and that theory is, well, completely unintuitive to many people in traditional manufacturing. DiBello writes:

    (MRP) instantiates certain key economic concepts such as zero inventory and just-in-time production and is based on principles of manufacturing (for example, formulas regulating how future orders are forecast) developed over the last several decades. (…) Employees working with the system must translate the company’s anticipated demands into a form that the MRP system can ‘understand’. This is done via a Master Production Schedule (MPS), which the system then interprets as a set of long range, abstract production goals for the company’s finished goods. With the information the system has on ‘what’ a particular finished good is (e.g. what parts go into it, what operations are involved, how long it takes to make each of its component parts and assemble it finally), it makes recommendations for every action leading up to the company’s preset goals. This includes deciding on start dates and quantities for production orders and determining the most efficient pattern of purchasing. [4]

    In other words, MRP implementations demand that users understand the ideas and principles of lean manufacturing; otherwise, they can never use the software the way it was intended. To hear DiBello tell it, this particular MRP implementation demanded that the air brake mechanics, supervisors and analysts in the remanufacturing plant internalise the following three ideas:

    1. That the MRP system conceives of all parts, assemblies, and finished goods as hierarchically arranged items, residing on ‘levels’ that map onto how a given item is manufactured.
    2. That the MRP system calculates the timing of events beginning with a future date and moving back to the present, instead of using a linear, chronological representation of time. This is referred to as phased time.
    3. That quantities in the system are not absolute, but relative to time and item. When making inquiries about how many of a given part are in inventory, the system calculates a virtual, or relative quantity, based on a number of time-sensitive factors.

    These three ideas, DiBello writes, cause MRP systems to ‘organise data and operations in a way that strike many people as counterintuitive.’ This goes a long way towards explaining why MRP systems are so difficult to use.
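
    To make these three ideas concrete, here is a minimal sketch of the calculations involved. To be clear, this is not DiBello’s simulation or any real MRP package (the items, lead times, and quantities below are all invented for illustration), but it shows a bill-of-materials hierarchy (idea one), scheduling backwards from a future due date (idea two), and time-relative inventory quantities (idea three):

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    name: str
    lead_time: int  # days needed to make or buy this item
    # Idea 1: a hierarchical bill of materials: (component, quantity per unit) pairs
    components: list = field(default_factory=list)

def backward_schedule(item, qty, due_day, plan=None):
    """Idea 2: start from a future due date and work back to the present."""
    if plan is None:
        plan = []
    start_day = due_day - item.lead_time
    plan.append((start_day, item.name, qty))
    # Every component must be ready on the day its parent starts production.
    for component, per_unit in item.components:
        backward_schedule(component, qty * per_unit, start_day, plan)
    return sorted(plan)

def projected_available(on_hand, receipts, demands, day):
    """Idea 3: 'how many do we have?' is relative to the day you ask about."""
    inflow = sum(qty for d, qty in receipts.items() if d <= day)
    outflow = sum(qty for d, qty in demands.items() if d <= day)
    return on_hand + inflow - outflow

# A two-level product: each 'starship' needs one hull and two wings.
wing = Item("wing", lead_time=2)
hull = Item("hull", lead_time=3)
starship = Item("starship", lead_time=1, components=[(hull, 1), (wing, 2)])

for start, name, qty in backward_schedule(starship, qty=10, due_day=14):
    print(f"day {start:>2}: start {qty} x {name}")
# day 10: start 10 x hull
# day 11: start 20 x wing
# day 13: start 10 x starship

# Quantities are relative, not absolute: the same inventory question gives
# different answers depending on the day you ask about.
receipts = {12: 20}  # a purchase order of wings arriving on day 12
demands = {13: 20}   # wings consumed by starship production on day 13
print(projected_available(0, receipts, demands, day=11))  # -> 0
print(projected_available(0, receipts, demands, day=12))  # -> 20
```

    Note that nothing in this sketch is a fixed quantity or a chronological schedule, which is precisely the inversion that trips people up.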

    So what did DiBello do? Did she give these workers lectures, and powerpoint slides, and workbooks? Did she split her training efforts into three different strategies, tailored for analysts, supervisors, and mechanics?

    No, she did not.

    DiBello’s approach contained all the seeds of her later work. What she did was get the entire facility together to play a simulation — she asked the workers to ‘manufacture’ three models of an origami starship, each with common and unique parts, purchased from ‘suppliers’, stored in ‘inventory’, and pushed to ‘customers’. The workers were told to run their production lines however they wanted; their only goals were to reduce inventory and maximise profits by the end of the game.

    Predictably, during the first round of the game, the workers did what they were used to doing in their day-to-day work. And DiBello and team sat back and watched them burn:

    During the first morning of the first day, participants were allowed to run their factory according to any organisational scheme they wished. This invariably failed to achieve MRP goals (low inventory, increasing cash flow) and by the end of the first morning, most teams of participants were bankrupt, were unable to deliver, or began arguing with such intensity that the game had to be stopped. [4]

    DiBello and team then took over:

    When this occurred, the game was paused and participants filled out a number of forms that helped them examine their patterns of decision making, evaluate their assets and losses, and reconstruct what had happened. These results were then compared by the participants with an MRP ideal of purchasing, production, and cash flow. After this phase, the participants were facilitated in the construction of a manual MRP system and introduced through various activities to its logic and overall functioning. Once they had a fully built and implemented ‘system’, they played the game again with the same customer orders and budget, but with very different results. [4]

    The idea here isn’t just that ‘learning by doing’ is superior to ‘learning by powerpoint’ (in fact the primary focus of the paper was on the pedagogical implications of using ‘constructive’ instruction vs ‘procedural’ instruction); the idea that is relevant to us is that — in order for learning to happen — participants had to have their old mental models destroyed through visceral failure, making way for new models. DiBello continues:

    It appears that ‘constructive’ activities permitted individuals to reorganise the implicit mental models driving the decision process. As indicated, during the first morning of the workshops, the teams of participants could ‘run’ their factory in any way they chose. Without exception, under this kind of pressure, all groups ‘defaulted’ to the strategies they normally used at work. Comparing the performance in the workshops (as indicated by the cash and material flow patterns as well as the participants’ notes and inventory control records) with work history indicates a strong correspondence between current work practices and decisions under pressure. For example, mechanics almost always delivered a high-quality product on time and thereby managed to stay financially above the water by ensuring income. However, they often did so by buying finished assemblies (as opposed to raw materials) at great cost, by accelerating production (which has the real-world equivalent in ‘overtime’) and by overbuying all materials. Thus, they sacrificed profits.

    In contrast, Analysts and Supervisors were often not successful at meeting customer orders but they were very hesitant to overspend; often they could not meet orders because they had waited too long to decide to purchase the necessary materials. It is important to note that individuals resorted to their default strategies even when explicitly instructed to operate differently and when they were aware that the goal of the game was to make the most profit and buy the least material.

    As indicated, during the workshops’ first pause, participants evaluated their strategies and decisions using various tools. During the discussions that followed the self-evaluation, participants began to reflect on these default strategies and examined how a different approach might have accomplished their goals. It was only at this point that participants became aware of the assumptions guiding their decisions under pressure and the attendant results. We have come to call this the ‘deconstruction’ phase and the beginning of ‘reorganisation’. During the MRP-oriented ‘constructive’ activities that followed in the next day and a half, participants were introduced to MRP concepts as they continued a pattern of inventing a solution using their intuitive understanding of manufacturing, noticing its mismatch with their goals, reordering what they had done, and comparing it again to their goals. At the end of this process, all participants arrived at the same solution but, it is important to note, they had all gotten there via different paths and had begun the process through sometimes radically different entries.  [4]

    It took me a while to link this study to the construct of cognitive agility. Eventually I realised that DiBello was drawing on — and had contributed to — a set of ideas that were in the air in the Naturalistic Decision Making (NDM) research community at the time, a set of ideas that I believe eventually settled into something we now call Cognitive Transformation Theory (courtesy of Gary Klein, 1997). [2][3]

    Cognitive Transformation Theory is a macrocognitive theory of learning. Macrocognition is a fancy word that means ‘observed cognition that is performed in natural settings, instead of artificial (laboratory) environments’. What this implies is that Cognitive Transformation Theory describes how people learn when goals are ill-defined, when conditions are messy and complex, when there is no clear feedback, when the pupil has no formal instruction, and when there exists no good pedagogical development in the domain. In other words, CTT is a theory that describes how people acquire expertise in real world environments. Amongst other things, it is a theory that tells us how people become masters of business.

    A compressed summary of the theory, from Accelerated Expertise, explains it like this:

    Cognitive Transformation Theory
    Core Idea
    1)  Learning consists of the elaboration and replacement of mental models.
    2)  Mental models are limited and include knowledge shields.
    3)  Knowledge shields lead to wrong diagnoses and enable the discounting of evidence.
    Therefore learning must also involve unlearning.

    Empirical Ground and Claims
    - Studies of the reasoning of scientists
    - Flawed “storehouse” memory metaphor and the teaching philosophy it entailed (memorisation of facts; practice plus immediate feedback, outcome feedback).
    - Studies of science learning showing how misconceptions lead to error.
    - Studies of scientist and student reactions to anomalous data.
    - Success of “cognitive conflict” methods at producing conceptual change.

    Additional Propositions in the Theory
    - Mental models are reductive and fragmented, and therefore incomplete and flawed.
    - Learning is the refinement of mental models. Mental models provide causal explanations.
    - Experts have more detailed and more sophisticated mental models than novices. Experts have more accurate causal mental models.
    - Flawed mental models are barriers to learning (knowledge shields).
    - Learning is by sensemaking (discovery, reflection) as well as by teaching.
    - Refinement of mental models entails at least some un-learning (accommodation; restructuring, changes to core concepts).
    - Refinement of mental models can take the form of increased sophistication of a flawed model, making it easier for the learner to explain away inconsistencies or anomalous data.
    - Learning is discontinuous. (Learning advances when flawed mental models are replaced, and is stable when a model is refined and gets harder to disconfirm.)
    - People have a variety of fragmented mental models. “Central” mental models are causal stories.

    Those of you who are familiar with mainstream ideas of ‘unlearning’ will probably be rolling your eyes at this theory. You might think that the ideas are obvious — and indeed they are! The entire field of organisational learning, for instance, has been harping on about unlearning for decades. And developmental psychologists have long documented how kids have to be ‘corrected’ on their intuitive — but wrong — model of real world physics.

    But keep the following ideas in mind: first, this theory describes learning in messy real world environments. For instance, Klein observes that leadership ability is a domain of complex expertise, with huge time lags between actions and consequences. Learners must introspect (that is, sensemake) in order to sort out cause, effect, and coincidental events that have nothing to do with their actions. This is true of learning any skill in an unstructured domain.

    Second, the implications of knowledge shields and sensemaking (their term for introspection for cues, feedback, and causal narratives) might be more significant than you first think. [3]

    Let’s walk through the nuances together.

    In a sentence: Cognitive Transformation Theory tells us that people learn when they replace flawed mental models with better ones, as a result of trial-and-error cycles. As they conduct those trial-and-error cycles, they receive feedback in a noisy environment, and reflect on past experiences in light of new information they generate from their actions. (In many cases, they must reflect on delayed feedback — which is an added complication.)

    What makes one person more effective at trial and error than another? Well, CTT tells us that their effectiveness is limited by their ability to unlearn previous mental models in pursuit of better, more effective ones. This is easy to say, but terribly difficult to do. The theory tells us that people are fundamentally resistant to discarding their models; they hold on to old models for all sorts of reasons. Worse, the more expertise they have, the more capable they are of resisting model destruction:

    As people become more experienced, their mental models become more sophisticated, and, therefore, people grow more effective in explaining away inconsistencies. Fixations should become less tractable as cognitive skills improve. Therefore, people may have to unlearn their flawed mental models before they can acquire better ones. Sensemaking here is a deliberate activity to discover what is wrong with one’s mental models and to abandon and replace them. (emphasis added) [3]

    Often, a person will refine an existing, flawed model, which then limits them in some way; they will stick to this model and explain away inconsistencies and other anomalous data. This is known as a ‘knowledge shield’. Knowledge shields tell us why learning seems discontinuous when you’re observing someone levelling up in the real world — people are often limited by how quickly they can break through knowledge shields and leave their old mental models behind.

    The core claim of the theory is that as expertise increases, the work needed to replace flawed mental models goes up, for reasons we have just covered. In other words, a program of effective learning in an unstructured domain (such as business) implies that you must acquire increasingly sophisticated techniques to double-check and if necessary, destroy, your current set of mental models. Worse, you must walk through all the implications of the previous, flawed model, and update them once the flawed model has been discarded. This usually implies a period of holding multiple inconsistent beliefs at the same time.

    This phenomenon is consistent with the observation (common in expertise research circles) that experts go above and beyond what is necessary to acquire feedback for their work, even though — or perhaps because! — they make fewer mistakes as compared to novices. It is also consistent with Robert Kuok’s observation that Chinese businessmen listen carefully and are ‘able to distil wisdom from the air’.

    All of this, of course, is simply a description of cognitive agility. The argument for DiBello’s construct is now easy to make: if learning in ill-structured domains demands that you refine and unlearn existing mental models, then the speed with which you are able to do just that will determine how quickly you are able to level up. Therefore: since business is an ill-structured domain, cognitive agility must be something that affects the acquisition of business expertise. As we’ve discussed previously, DiBello’s FutureView Profiler is designed to measure this very property in the exec teams she evaluates.

    Cognitive Transformation Theory also helps us understand the reasoning behind DiBello’s training methods. It tells us that a training program to accelerate expertise should be optimised to break knowledge shields, and quickly. In other words, DiBello’s methods work because she lets teams fail in a very rapid, public manner, within a simulation that feels like the real company they work in. This visceral failure produces what she calls a ‘deconstruction’ phase, which then allows her team to step in and reorganise the company’s mental models.

    In her NDM podcast interview, she says that her obsession with this notion of ‘cognitive reorganisation’ was what drove her for most of her career. It was what ultimately led to the simulation-based approach of her consulting practice, one that she has refined over the course of two decades with large corporations.

    Two Real World Examples

    Everything seems remarkably simple and easy when articulated at the level of pure theory. I want to leave you with two examples, drawn from recent experiences, that hopefully illustrate why learning in ill-structured domains is still horribly difficult. In the process I’ll give two learning techniques that I’ve found to be somewhat useful in attempting to break knowledge shields.

    (The implication from CTT is that these two techniques may prove insufficient as I become more competent. Nevertheless, I will present them here because I have found both useful).

    Both examples are with friends who are more intelligent than I am (by which I mean they easily outperform me on standardised tests). I tell these two stories mostly because they go well together; I’ll leave stories of my own failings for some other post.

    Learning Marketing

    I was working with a friend on a marketing campaign. Over the course of the campaign we worked with a more junior employee who, by my assessment, was more believable in one particular aspect of the project than the two of us combined: he was very good at copywriting, and had at least two past successes executing landing page rewrites.

    As we were working on our set of landing pages, I noticed that my friend kept changing things to do them his way. I had enough marketing knowledge by then to understand that he was making subtle, repeated errors, and that he was changing things that our junior teammate had added for specific persona-related reasons. The junior teammate kept resisting, but he was not as good at making coherent arguments, especially not to someone senior to him.

    I told my friend to defer to the junior employee. He resisted, and said that his opinions on the landing pages were valid. This happened multiple times over the course of a week, until I finally lost it, and gave him a link to my article on Believability.

    “Read this,” I said, “and internalise it. This person is more believable than us, so instead of arguing, you should listen to the arguments that he is making.”

    “I am listening to his arguments,” my friend said. And to his credit, he was — but he was also debating endlessly over changes to the landing pages.

    “No,” I said. “You’re listening, but you’re not listening. You keep trying to map his explanations to your existing mental models of marketing. And then they get lost in translation. Because your models are bad. You are not believable in this domain. You’ve not done much marketing. And so your intuitions are wrong, your models are wrong, and your opinions are all subtly mistaken. Mapping everything that he’s saying to something you already know is just slowing (your learning) down. You should set aside all your notions of this domain, and listen carefully to what <name of junior employee> is trying to say.”

    We spent some time later that day listening carefully to the junior employee. By the end of our call, even I — who thought I was good at writing marketing copy — learnt a number of powerful new ideas. (I also had to apologise for my brusque tone, after reflecting on my behaviour for a bit.)

    Admittedly, I was only able to be direct with my friend because we both trusted each other, and because I was credible in his eyes. I explained that Ray Dalio’s concept of Believability had rapidly accelerated my effectiveness in the years since I learnt it. He knew I had the history to back it up: he had seen how I had built up my old office in Vietnam — in a foreign city, working in a language that I didn’t speak, in a part of the world with weird bureaucratic systems — and he saw how quickly I got better at content marketing after I left that role. I told him that the goal of listening to more believable people was to accelerate the acquisition of new models; why bother arguing when you are in conversation with someone who knows what they’re doing?

    I added that — sure, maybe the new model you’re listening to isn’t 100% right. But there is zero cost to setting aside your own models, in order to try the new ones on for size. I argued that you could always down-weight them later once you gain more experience in the domain. I’m not sure if my friend totally agreed with me, but we both deferred to the junior employee.

    The landing pages did well.

    Bootstrapped Businesses

    A different friend told me that she would never again run a bootstrapped business in a crowded industry, nor one that was facing secular decline. She was in the process of transitioning away from her current business — which was in a problematic industry, though not a crowded one. Her conclusions were consistent with her experience. I just thought that they were the wrong ones.

    I pointed out a few counterexamples. For each example that I brought up, she came up with a reason for why it didn’t apply. Either the company was an exception, or it was in the right place at the right time, or the market dynamics were weird, or it didn’t face terrible competition. I couldn’t convince her, and eventually we agreed to disagree.

    Why did I think she was drawing the wrong conclusions?

    First, I thought that there were sufficient counterexamples of bootstrapped companies in crowded markets with well-funded competitors:

    • Basecamp, in project management software.
    • SignWell in e-signature software.
    • SavvyCal in calendar scheduling.
    • ConvertKit in email service providers.
    • Before ConvertKit, Drip in the same space — email service providers.
    • Postmark in email API services.

    Of course, my friend had a neat explanation for why none of these companies required her to change her mental model. I, on the other hand, found it implausible that six companies were all ‘exceptions to the rule’. At some point an exception is not an exception, but a challenge to the coherence of the model itself.

    Second, I knew from listening to Rob Walling’s podcast that he believed it was totally legitimate to go after a crowded market with huge competitors. Walling was the founder of Drip, one of the companies that tackled an extremely crowded, competitive space; the way Walling usually puts it is “if you go after a small market, the problem is finding customers; if you go after a big market, the problem is competitors.” When phrased in this manner, the question becomes ‘how do you thrive as a bootstrapped company in a large market?’, not ‘can you even start a successful bootstrapped SaaS in a large market?’ — since multiple examples already exist for the latter.

    On top of that, I knew that Walling had brought founders of similar companies onto his podcast in the past, and they calmly talked about attacking large markets with well-funded competitors as if it were a normal, totally acceptable bootstrapper thing to do (one, two, three). In such conversations, the founders usually talk about finding a niche, or working out some positioning to give them the space to succeed. They don’t avoid crowded marketplaces — in fact many of them are eager to build in well-established industries.

    Third, many of those people were extremely believable bootstrappers — certainly more so than my friend and I combined. SavvyCal, SignWell and Postmark’s founders, for instance, were people with more than two successful companies or bootstrapped products to their name. Walling himself had done five.

    Since I had the ‘set aside preconceived notions and listen carefully to more believable people’ thing going on, I thought they were more likely to be right. I couldn’t be sure, of course. But a believability-weighted evaluation of their beliefs made it seem more probable that they knew what they were talking about, and that my friend was overfitting conclusions based on her one business experience.
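
    (For the curious: the weighting I had in mind can be sketched in a few lines. What follows is my own toy illustration, not Dalio’s actual procedure; the scoring heuristic and all the numbers are invented, though the idea that roughly three relevant successes make someone credible comes from Principles.)

```python
# A toy sketch of a believability-weighted evaluation, in the spirit of Ray
# Dalio's idea. The scoring heuristic and all numbers are invented for
# illustration; this is not Dalio's actual procedure.

def believability(successes: int, domain_relevant: bool) -> float:
    """Crudely score a person's believability in a given domain."""
    if not domain_relevant:
        return 0.0
    # Rule of thumb from Principles: roughly three relevant successes
    # make someone credible; cap the score there.
    return min(successes, 3) / 3

def weighted_view(opinions) -> float:
    """Average positions in [-1, +1], weighted by believability."""
    total = sum(weight for _, weight in opinions)
    if total == 0:
        return 0.0
    return sum(position * weight for position, weight in opinions) / total

# Claim: 'bootstrappers can succeed in crowded markets' (+1 agree, -1 disagree).
opinions = [
    (+1.0, believability(successes=5, domain_relevant=True)),  # e.g. Walling
    (+1.0, believability(successes=2, domain_relevant=True)),  # other founders
    (-1.0, believability(successes=1, domain_relevant=True)),  # my friend
]
print(round(weighted_view(opinions), 2))  # 0.67 -> leans towards 'agree'
```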

    Finally, I thought that my friend had unnecessarily restricted her range of options. Her conclusion was that since a) you cannot run a successful bootstrapped business in a crowded market unless b) the market is itself rapidly expanding, you should either c) raise venture money if you want to compete in a crowded market, or d) play in an uncrowded market that sucked. This left her with two fairly restrictive options, and I thought those constraints were too onerous. I worried that she might be setting herself up to play a harder game.

    (On the flip side, said friend pointed out the implications of certain SaaS metrics that I had not fully thought through, so I found that I had to reconfigure my mental models of the domain. This was rather uncomfortable for me. Turnabout is fair play!)

    In the end, though, I didn’t push too hard. I could totally see a universe in which my friend raised venture capital and built a large company and succeeded. And that’s the funny thing about learning in unstructured domains: sometimes, even with incomplete models, you win big. Sometimes, even with the wrong conclusions, you can become successful.

    Which probably just describes life.

    Wrapping Up

    In a way, the importance of cognitive agility in DiBello’s Profiler is simply the flip side of the strategy she uses in her training programs. Most people, in most domains, must be violently shown that their models are deficient before they are willing to learn new ones. This is as true in business as it is of learning in any real-world domain.

    But people with higher cognitive agility are able to do without that. One way of looking at the FutureView Profiler is that execs at the top end of the business expertise curve should have commensurately higher cognitive agility — and if they do not, then it’s likely that they are only able to perform in ‘kind’ business environments. Individuals with higher agility are able to evolve their mental models in messy, delayed-feedback environments — and draw the right lessons from them. They are more willing to discard models, and to deal with the implications of tossing old models aside. Most importantly, they actively fight against the illusion of model coherence the higher they climb in their respective skill trees.

    All of which is easy to say, but ridiculously difficult to do. When seen in this light, the problem of cognitive agility reduces to a simpler question: “How much energy are you expending to check the coherence of your mental models?”

    The answer, if you’ve gotten any good at what you do, should sound something like ... “more”.

    Sources

    1. Lia DiBello, David Lehmann & Whit Missildine (2010). How Do You Find an Expert? Chapter 17 in Informed by Knowledge: Expert Performance in Complex Situations.
    2. Gary Klein (1997). Developing expertise in decision making. Thinking and Reasoning.
    3. Gary Klein & Holly Baxter (2009). Cognitive transformation theory: Contrasting cognitive and behavioral learning. In The PSI Handbook of Virtual Environments for Training and Education, Vol. 1, Education: Learning, Requirements and Metrics.
    4. Lia DiBello (1997). Exploring the relationship between activity and the development of expertise; paradigm shifts and decision defaults. In C. Zsambok & G. Klein (Eds.), Naturalistic Decision Making.
