
This is part of the Expertise Acceleration topic cluster.

Our First Accelerated Expertise Course



The thought of business school makes you go ‘eww’?

You’re in good company.

9,000+ investors and operators read Commoncog to sharpen their business acumen ... WITHOUT going back to school.

Sign up for our newsletter and get a weekly dose of good business thinking (no BS guaranteed):

    It’s been five years since I published a summary of the 2016 book Accelerated Expertise. It’s only now that I think we’ve finally managed to pull off a ‘proper’ accelerated expertise course.

    Well … perhaps ‘proper’ is not the right term. Accelerated Expertise talks about how the vast majority of accelerated expertise training programs covered by the book are underpinned by two learning theories: Cognitive Flexibility Theory (CFT), and Cognitive Transformation Theory (CTT). We’ve talked a lot about CFT over the years — Commoncog’s Calibration Case Method and Case Library are built around it. And though we’ve been experimenting with it for a long time, in all honesty CFT was quite easy to put to practice. The very second essay that I ever wrote about CFT contains instructions for how one may build a personal case library for self-improvement. It is really CTT, that second theory of expertise, that I’ve found very difficult to apply. I’ve read the original CTT paper about five times now, and I never really understood how to use it.

    Well, that’s no longer the case. A couple of things have clicked for me in the past year, thanks in part to ongoing exposure to Gary Klein’s work on sensemaking, but also to experimentation with teaching. And so this was the first time we’ve managed to apply both theories, on actual students, and in a live course to boot.

    I’m talking, of course, about the recently concluded Cohort Two of Speedrunning the Idea Maze (StIM). I’m quite pleased with how it went. I’ll write a little about what I think we could do better at the end of this piece, but for now, I want to open with the course recap email that I sent to students last week.

    A Course Recap

    Here is a course recap email that I sent to students at the end of StIM. Some parts of the following email are redacted because I want to keep the exact contents of the course a surprise for future students. But I want to highlight the middle bit of this email, which explains the theory of expertise we used.

    A little context is necessary. Speedrunning the Idea Maze is a course about the search for product market fit. You may read the full course pitch in the announcement for Cohort Two. The core framework for the course is Saras Sarasvathy’s ‘Effectuation’ — a method of thinking that all good entrepreneurs do. (You may read a brief summary of the concept in What We Learnt From Speedrunning the Idea Maze; you may also read past coverage of the framework here). In Cohort One, the course had the unintended side effect of “making you more entrepreneurial”. The question we had with Cohort Two was: could we make this effect more pronounced?

    Early indications suggest that we’ve succeeded. But we’ll have to wait to see the longer-term impact.

    The email follows:


    Hi there. Cedric here! If you’re reading this, you’ve finished Speedrunning the Idea Maze. Rhea and I would like to thank you for taking this course.

    This is your course recap.

    What is the point of a course recap?

    This is a fair question to ask. If you’ve been following along, you should already have a good sense of the ideas we’ve covered together. Four weeks isn’t a long time, after all. And while it may feel like there is a large number of ideas we’ve tried to stuff into those four weeks, in reality there’s just a simple core:

    • You cannot unsee what it means to effectuate.
    • You know how it feels to “take action to generate information”.
    • You have an inkling of what demand actually is.
    • And all of this is hammered home with 10 cases, fragments of which should hopefully live in your head forever.

    The purpose of this course recap is not to provide a one-stop summary of the ideas of the course (though we will do that, just a bit, so you have a single reference to go back to in the future). The main purpose of this recap is to contextualise the experience that you’ve just had.

    Here’s what we’ll do: we will present a straightforward recap, then explain the theory of learning we used to teach you, and then redo the recap through the lens of that theory.

    Hopefully this gives you an idea of what to do next, and how to use the experience that you’ve just had.

    The Simple Recap

    Here is a brief recap of the four weeks first, covering just the material that we did together. This is a straightforward account of everything you experienced.

    1. In Week 1 we looked at ████████.
    2. In Week 2 we started the private podcast episodes, each of which ███████ █████████ and ████████
    3. In Week 3, we asked you to reflect on ████████████. We also ████████
    4. In Week 4, we asked you to reflect on ████████████████

    Ok, so that’s the material recap. But what did you learn?

    What we covered and what you learnt are two different things. We covered a lot of things. What we hoped you learnt each week is much smaller.

    Let’s talk about the learning theory we used to design this course. Then we’ll redo that recap, but this time from the perspective of that theory.

    Cognitive Skills, not Procedural Skills

    Here is a rhetorical question: what can be taught in four 90-minute sessions, spread out across four weeks? The answer, if we are talking about procedural skill (i.e. how to do things), is … not much. Concert pianists practise every day for years to get good at playing the piano; football players train for hours each day, supplemented by physical training, in order to win matches. Closer to home: programmers, writers, managers, marketers and salespeople must put in dozens of reps in order to get good at their respective domains.

    So realistically, we cannot do much with 360 live minutes over the course of a month. Even if you add in the two hours or so a week of class prep you did, that’s still not much in the grand scheme of things.

    But we can do a lot if our goal is cognitive skill. At its core, cognitive skill is about being able to ‘see as the experts see’. This means that you should be able to see cues and subtle perceptual discriminations that experts use to make sense of situations; it also includes explicating the causal structures in cases, so that you may make sense of your own experiences.

    To put this differently: training for cognitive skills unlocks your ability to learn from reality. (The theory that we are using for this sort of learning experience is Cognitive Transformation Theory by Gary Klein and Holly Baxter, but don’t worry about this for now — you’re not expected to read the paper).

    You might’ve noticed that this is not the typical approach to teaching or learning — or at least not the normal approach that we are exposed to in schools. So why does it work?

    Here are three quick examples that will help illustrate this:

    Example One: Effective Feedback

    Here is a section from Jared Peterson’s Six Principles for Effective Feedback. Peterson works with Klein, the creator of the aforementioned theory of learning. Pay attention to what he says about feedback:

    Great process feedback doesn’t just correct actions, but deepens understanding by shaping their mental models—that is, their causal understanding of how things are related—enabling deep insights into different situations. The insight a trainee has into the causal relationships can be more effective than mere “muscle memory”.

    Consider Gary Klein’s friend “Jimmy” who asked for help training his backhand in racquetball. After watching Jimmy lunge, swinging wildly, never putting himself in a position to hit the backhand cleanly, Gary announced they were done playing. Instead, Gary would hit the ball to him, and Jimmy was to observe where the ball went. As Jimmy watched, he began to better understand the movement of the ball. Gary then asked him where he should place himself to hit it, which Jimmy dutifully identified. This improved his confidence, and when Jimmy asked to hit the ball again, Gary consented. This exercise turned out to be what he needed. With his newfound cognitive skill, Jimmy could now hit the ball consistently with his backhand. (Klein, 2011)

    Without a good mental model, experience teaches nothing and practice is a practice in bad habits. Without a causal understanding, we can’t identify what is relevant, and will mistake saliency for relevancy which results in practicing the wrong thing. Conversely, with a good causal understanding we can make sense of feedback. Good feedback simultaneously depends on and improves our causal understanding of the world.

    Two things happened simultaneously throughout StIM: first, we expanded your experience base by giving you new cases every week, and then we made sense of those cases together. Second, we prompted you to look for instantiations of these ideas we were covering in class in your own lives. The two sensemaking experiences were supposed to feed into each other.

    Notice that we didn’t give direct feedback on your sensemaking. (That would be less effective, as no instructor will be around to give you feedback when you reflect on your experiences). Instead we modelled the sensemaking in class, and trusted that you would pick up and incorporate relevant elements into your own practice.

    Example Two: Tactical Decision Games

    In the Marine Corps, instructors provide cognitive training by giving trainees battlefield scenarios (typically drawn in the sand, in a real sand pit built for this purpose) and asking trainees what they would do. After trainees give their responses, an experienced Marine officer present will describe what they would do. What matters here is not the precise answer (the scenario is made up, and even if it isn’t, it is unlikely to recur in reality); the think-aloud process is what is important.

    (If you want to experience such a scenario for yourself, you may do so here: Enemy in the Assembly Area. Make sure to read the answers).

    Notice that Marine trainees are still expected to train in hard skills: marksmanship, communications, equipment maintenance, troop movements, and so on. But the cognitive training is a very important part of their training because the Marine Corps cannot afford to have its Marines learn their tactical decision-making skills from real battlefield experience alone.

    Notice also that many, if not most, of these battlefield scenarios are made up. Sensemaking from hypothetical scenarios is still useful because it teaches the brain mental moves it may reuse in real-world scenarios. The human brain does not have strict boundaries between real and imagined experiences. So long as the thinking task is similar to the real-world task, even made-up scenarios can help. This was what we were trying to do with the private podcast exercises. The exercises were chosen for high cognitive fidelity to the real-world task: you were often asked to effectuate from your own unique set of resources.

    Example Three: Judo

    Another way of talking about this form of training is that teaching novices to see like experts makes all downstream learning easier for the novice.

    Some of you know that I do Judo. Judo is a wrestling sport where if one player throws the other flat on their back, they win. The sport has many techniques (called throws) — over a hundred such techniques, in fact.

    Novice Judo players believe that Judo is a sport of throwing — that if you learn the throw very well, you will win. In reality, Judo is a game of grips. Most novice players are unable to throw their opponent because they do not nullify their opponent’s grips. As a result, their opponent (who still has a strong grip on their Judo uniform) is able to resist and stop the throw. A skilled Judo player nearly always nullifies their opponent’s grips before throwing, thereby guaranteeing that their technique cannot be blocked.

    One of my favourite things to do is to take a relatively junior Judo player and teach them to see gripping situations. This typically takes no longer than a few minutes each class, and consists of nothing more than standing by the side of the dojo, explaining (in some cases demonstrating) a small handful of basic concepts, and then narrating what’s going on as other players are sparring.

    “Look at Bob,” I would say. “His head is being pushed downward by that strong collar grip behind his neck. Notice how he does not have a usable grip. He’s going to get thrown.” And two seconds later, Bob is flying through the air.

    I like doing this because there is such a large return on investment on the relatively little time I spend. Even if I never meet the player again, if I’ve done my job correctly, the player will be able to make sense of their own sparring experiences, and will be able to come up with solutions in response to their losses. In other words, I would have successfully accelerated their development. For instance, the player might reason:

    • “I was thrown by Jim because he dominated me and pushed my head down. But how did he do that? (After watching Jim sparring with others, or reviewing video footage): Oh, I see how he did that. A) I can copy him and do it to others, B) I need to figure out a way to prevent that from happening to me ever again.”
    • “I notice that all players who do throw X seem to set it up in the exact same handful of ways. I need to develop a way to stop each of those ways.”
    • “The coach is teaching this technique today. I think it might be a useful solution to that gripping situation I’m having trouble with. I will test this in sparring tomorrow.”
    • And so on.

    Like the Marines, the Judo player still has to learn hard skills: with over 100 techniques in the sport, and many more variations of each technique (plus counters and combinations), the player has a lot to learn on their way to mastery. But a player who understands why he or she is being defeated is able to do trial and error more effectively. Many novice players develop this understanding the hard way. What cognitive training is able to do is to offer a short cut.

    And so it is with this course.

    Many of the elements of effectuation must be put to practice in order to give you the results you desire. Specifically:

    • You still need to figure out what affordable loss means to you, and you must learn what it feels like when you actually take that loss. This allows you to build courage, for you realise that affordable losses are really ‘reversible decisions’ or ‘two-way doors’, to use Jeff Bezos’s terminology.
    • You still need to learn new ways to create partners, though you are now able to spot how others do it and copy their methods for yourself.
    • You must get better at instinctively answering the four questions of uncertainty when taking action to generate information.
    • You must also get better at spotting underserved gaps in situations that may indicate potential demand.

    But the point is that you know all of this, and can pursue it in your own life, on your own time, with no added instruction necessary. You have a taste of how an expert entrepreneur thinks. Now you can flesh it out.

    The Learning Recap

    Let’s redo the course recap, but from the perspective of cognitive learning.

    1. In Week 1, the goal was to introduce to you a new lens for seeing the world. ████████.
    2. In Week 2, the goal was to really hammer home this new effectuation lens. We did this by ████████.
    3. In Week 3, the load-bearing case was actually ████████.
    4. In Week 4, we introduced you to the shape of demand. ████████.

    Admittedly, this last concept is a bit rushed. But we can’t yet think of a way to reliably get you to find an unaddressed gap (Merrick Furst, one of the authors of Heart of Innovation, notes that in Flashpoint most such gaps are found only after six-to-eight weeks of continuous interviews — assuming they’re found at all). Perhaps this is something we can experiment with in the future.

    For now, you already have the tools to approach finding such gaps: make sure you have affordable loss built into your life. That way you can try, try, and try again (with no loss of enthusiasm, because that’s just what effectuating is like).

    One Last Note

    There is one final thing that I wanted to communicate during the course, but couldn’t find a way to do so. So here it is.

    Most programs about finding product market fit will focus on that last concept: on the nature of finding demand. This was true for Furst’s Flashpoint program, and it continues to be true in many similar programs in startup accelerators or incubators around the world.

    And for good reason: most courses about the idea maze are tied to some incubator-type thing. Incubators (or accelerators, or venture capitalists) don’t actually care if you, specifically you, succeed. They care that at least one company in the cohort succeeds in finding product market fit, so that they may invest in that company and generate a return from the investment. So that’s why they spend so much time on demand, even though most such programs do not seem to increase the odds of finding demand by very much.

    That is not the case with StIM. We want you, yes — that means all of you — to eventually succeed should you choose to pursue entrepreneurial paths. This is ultimately why we spent so much time on effectuation, with demand only in the final session of the course. Rhea and I cannot underscore this enough: finding demand is not deterministic. No matter how much you want it, and no matter how well equipped you are to recognise the shape of demand, there is nothing in this universe that can guarantee that you will succeed in finding demand when you set out to find it. Not every startup in Furst’s Flashpoint program found demand, after all. It is in this way that entrepreneurship is a lot like finding love, or like life itself — in both life and love you can never seem to make things happen exactly when you want them to. Life has a way of subverting expectations. Business is even worse: it is all the most uncertain bits of life tucked into one domain.

    When seen in this light, the only way to guarantee success is to have an approach that will ensure you survive over the long term … long enough to hit gold. No VC will underwrite this model, because no VC has a time horizon as long as a full career. But we are not VCs. And expert entrepreneurs — the ones who repeatedly start companies — are not necessarily those who will seek out VC money. They will do whatever works. Sometimes that means taking institutional money. Other times it means forgoing it.

    But I’m repeating myself now. You already know how to think like an experienced entrepreneur. You should be able to see when an effectual solution is possible. You know to take affordable loss bets, to generate answers to the four questions, to recruit partners, and to roll with the punches, whatever you may find along your journey. You already have a taste of effectuation. Perhaps you should try?

    I am hopeful that you will know what to do with the new lenses you now have in your head. Perhaps with time you will also believe — like I do — that many situations are akin to an orchard full of unplucked fruit, just sitting there, hidden by the fog of uncertainty. You just have to act to find out.

    I wish you the best of luck.

    Warmly,
    Cedric

    Why This Training Approach is So Weird

    I want to point out that the training approach we used here is actually really weird. If you’re used to mainstream approaches to practice, the idea that you can train cognitive skills separately from procedural ones is just … odd? Hell, the idea that a four week training intervention can change your ability to learn from your experiences in the real world (because you can now see like an expert entrepreneur) is very strange; we don’t typically think of training like this.

    The subfield most closely associated with such training approaches is known as Naturalistic Decision Making, or NDM. It was primarily funded by military and industrial use cases over the past three decades, and sits somewhat outside the mainstream psychology community. (It should be said, however, that heavy hitters like Daniel Kahneman believed that more academic psychologists should pay attention to Gary Klein and his colleagues, but I digress). To say that the methods of this community work is a bit of an understatement; these scientists do not get funded if their research is unsuccessful in delivering results in messy, complex domains.

    I once asked Brian Moon and Laura Militello, the hosts of the NDM Podcast, why NDM training approaches feel so different from what we normally think of as practice. I can’t remember what they said to me, but I remember feeling unsatisfied with their answers at the time. I couldn’t complain, though: they recommended Accelerated Expertise to me, and it was through their recommendation that I read the book.

    But I think I have an answer now. The reason this sort of training approach feels so weird is because it inverts the conventional approach to practice.

    The conventional approach to practice goes like this:

    • The coach pushes the trainee to the edge of their abilities, breaks skills down into subskills, and gives the trainee exercises for each subskill before building it back up to the higher-level skill. These exercises are typically accompanied with outcome feedback.
    • Through the practice of these exercises, the trainee will form the correct mental models for the domain. Such mental models are a by-product of the training.

    Sharp readers will notice the above points are some of the requirements of deliberate practice. They map to how we train in most sports, and how we are taught in school. Now compare this to a more NDM-style training program — that is to say, to an accelerated expertise training program:

    • The trainer (which may or may not be a human) focuses on ensuring the learner is able to see what an expert sees. This is the primary goal for all such training. ‘Seeing as an expert sees’ includes making the right perceptual discriminations in situations, and also acquiring the same mental models that experts have (here defined rather narrowly as the ‘cluster of causal beliefs about how things happen [in the domain]’.) In other words, the training succeeds if it results in the construction of the correct mental models for the domain.
    • Feedback should be judiciously given. This is important because we want to preserve the student’s ability to draw the right lessons when reflecting on their own performance, since they will be operating with no instructors in the wild. Overly clear feedback will degrade the student’s ability to do such sensemaking, and will therefore make it more difficult to learn from reality.
      • Conversely, helping students develop the expert’s tacit ability to spot cues and understand causal mechanisms will accelerate their ability to learn from trial and error in the domain. This is especially helpful if the domain is a constantly changing one.
    • When given, feedback is more likely than not to be process feedback as opposed to outcome feedback, since the goal during training is on developing the right mental models, not on executing the right procedural skill.
    • The coach also has the difficult task of dismantling incorrect mental models (sometimes known as ‘knowledge shields’) in order to introduce new, correct mental models. The latter mental models are often extracted from the heads of experts through something like Cognitive Task Analysis.

    In sum, conventional training programs target either declarative knowledge or procedural skill directly, with the belief that the student will automatically develop the right mental models given a high enough volume of practice. On the other hand, NDM-style accelerated expertise training programs target development of the correct mental models directly. This has the added benefit of aiding acquisition of procedural skill later, which may be accomplished through more conventional training methods.

    In Applications of Cognitive Transformation Theory: Examining the Role of Sensemaking in the Instruction of Air Traffic Control Students, Wiltshire et al attribute these differences to different academic lineages. CTT is fundamentally a Piagetian theory, whilst deliberate practice draws from Herbert Simon’s information processing paradigm (which makes sense; Ericsson was Simon’s student). The authors write:

    The Data/Frame Theory of Sensemaking and CTT can likewise be contrasted with learning theories rooted in the information processing, or cognitivist, paradigm. CTT and Klein’s sensemaking theory are constructivistic in nature — learning and cognition are described as actively guided by metacognitive processes such as seek and interpret own feedback (according to CTT) and judge frame plausibility and gauge data quality (according to the sensemaking theory). In contrast, the dominant theories of cognition and learning tend to describe passive, automatic processes. These process theories offer connectionist and spreading activation explanations of learning that involve creating, strengthening, and weakening connections and rules [e.g., Anderson’s (1983) ACT and Shiffrin and Schneider’s (1977) theory of automatic and controlled processing]. Ericsson’s (2006) deliberate practice strategy for expertise acquisition makes similar assumptions. Using this strategy, the learner strives to actively seek out challenging tasks and maintain focused effort on improving in order to escape performance ruts and escalate his or her expertise. The learner does not escape those ruts through self-initiated adaptations of knowledge, skill, or learning strategy, however. Instead, deliberate, focused practice at levels of gradually increased difficulty automatically produces the adaptation and associated improvements in skill — that is, the learner does not actively question or shape the knowledge or skill he or she is developing.

    (…) Klein and Baxter argue that sensemaking, and thus learning, should be driven by the learner. They argue that frames and mental models — that is, knowledge abstractions that shape understanding and guide action — can be actively renovated by the sensemaker or learner (bold emphasis added). We may tend to become invested in our frames and mental models, and may consequently be reticent to make major changes to them (e.g., Feltovich, Spiro, & Coulson, 2001); but doing so is both possible and necessary for adaptive sensemaking and continued learning (bold emphasis added).

    Where does this leave us? I think this fundamental difference explains why CTT (and CFT) training approaches are considered accelerated expertise training: in addition to speeding up skill acquisition, the ability to sensemake effectively unlocks your ability to learn the right lessons from your experiences. This in turn allows you to adapt to whatever you find in reality.

    You can see how this might be useful. A warfighter dealing with improvised explosive devices (IEDs) cannot rely on ‘going back to the classroom to learn about new IED strategies’ to survive. He or she must learn to deal with an ever-changing adversary. Here, adaptive skill is the difference between life and death. Acquiring the mental models of more experienced soldiers (who ‘have a bad feeling’ about potential IED risk) makes it more likely for them to sensemake successfully and therefore adapt to evolving IED emplacement tactics. The same goes for an investor in a changing market environment, or a business operator in a new country. Deliberate practice has little to no traction with changing skill domains. CTT and CFT training programs do.

    This is truly no small thing. I’ve written elsewhere that research now tells us that ‘adaptive expertise’ — the ability to adapt to novel situations in your skill domain — is what differentiates true experts from the ‘merely’ good. The implication is that true ‘mastery training’ is actually training that targets adaptive skill directly, instead of routine performance. So: any training that unlocks your ability to learn from trial and error cycles in a constantly changing domain is exactly the sort of thing that gives you a shot at true mastery.

    And I finally understand how this works.

    Room for Improvement

    CTT has recommendations for four aspects of training: Diagnosis (of the student’s current state), Learning Goals, Practice, and Feedback.

    In Cohort Two of StIM, we did a decent job of improving three of the four aspects of training:

    1. Rhea and I were clearer on the learning objectives for the entire course: we wanted students to see effectuation in their own lives and in any story of new business. (The framework, as mentioned before, is universal). More importantly, we wanted students to see that effectuation is a possible path in their careers.
    2. Practice: we gave students a private podcast feed with hypothetical scenarios for them to effectuate from. This served as lightweight simulations, designed to provoke construction of the thinking style.
    3. Feedback: we modelled sensemaking feedback in the live sessions, when we got students to share their answers to some (though not all) the scenarios. Much of the sessions were also built around making sense of the cases we gave students each week. We were careful not to give direct feedback to several of the scenarios.

    So far, so good.

    But the one thing we could improve on is Diagnosis. We had no way of evaluating the current state of students’ mental models, nor could we measure how well effectuation as a thinking style was taking hold. I don’t actually know how to do this right now, but I’m sure I’ll come up with some ideas over the next few months.

    I have more to say about CTT. I’ll do a follow-up piece explaining what the theory says, why it works, and how to use it. More folks should know about this, I think, so they may use it in their lives and in their training. Stay tuned.

    Be Informed When the Next Cohort of Speedrunning the Idea Maze Opens

    We’re not sure when we’re running the next cohort of Speedrunning the Idea Maze.

    Sign up for the waitlist below, and we’ll inform you when a new cohort opens.



