What we've learnt from creating a simple CFT case library for business.
Last week, we ran an alpha test for a Cognitive Flexibility Theory-inspired business case library. As of press time, I have a handful of responses via the exit survey, and a smaller set of qualitative feedback (which is a fancy way of saying free-form complaints via chat, email and other channels) ... all of which are decidedly mixed.
I thought I’d write a short note on what we’ve learnt.
As always, these lessons are tentative — when you’re iterating on a product, you generate information first, use the initial feedback to iterate for more information, and then make sense of everything a few cycles later. This note will include only the most conservative lessons that I can state with some confidence; everything else is subject to reinterpretation as the experiment continues.
Why is this useful? Well, if you’re interested in constructing a Cognitive Flexibility Theory-inspired case library of your own, the information generated from this alpha test might be of some use to you.
In the end, it’s all product development
A brief recap: the Commoncog Alpha test consisted of eight case studies on the concept of ‘scale economies’. Cases were sent via email once a day, every day, for eight days. Readers were also sent an initial ‘here is the concept’ setup email, and a concluding ‘here are some themes; give us feedback’ exit email.
As far as products go, the alpha test was a relatively simple one. But squint a little and you should be able to see the shape of some natural product tensions that emerge from such a simple structure:
- How do you write the cases? How messy or focused should they be?
- How often should you send the cases? Is once a day too much? Is once a week too little?
- How much entertainment value should there be, assuming that people are busy and that learning takes mental effort?
- Should cases illustrate multiple complex concepts, and if so, should those concepts be called out?
There are, of course, other questions that we didn’t even think to ask until after the test. But I want to talk about a specific event which I thought captured some of the tensions we had to navigate (and that, really, anyone with product experience should be familiar with):
About halfway into the alpha test, a reader pinged me to say that the case studies weren’t doing it for her — she was expecting management consulting-style cases, i.e. tightly written cases designed to illustrate specific nuances of a particular concept. This was notable, mostly because CFT argues against tight cases for complex concepts — instead, the theory recommends exposing learners to complex real-world stories that contain multiple concept instantiations.
You’d think that the response would be simple: that we should change the format of the case library to focus on calling out specific concept instantiations. But there are actually a number of potential solutions:
- The fact that this reader thought the cases were too messy for her liking might indicate a failure of onboarding. Perhaps one solution is to rewrite the way case sequences are introduced to focus more on the why of the theory.
- Or perhaps the reader represents a specific type of worldview (e.g. all examples are disposable; concepts are more important than cases) and future marketing efforts should be designed to exclude customers with such worldviews.
- Or perhaps the cases really are too messy, and we should rewrite them to be tighter. A problem that emerges from this is that tight cases can’t easily be reused across multiple concepts, which in turn produces a scaling problem — but that’s a problem for a separate trial and error cycle.
When you boil it down, these are basically problems of product management. I don’t know the answers to the above questions, or even what a final solution might look like — but I’m fairly certain that the solution is out there; it’s just going to take me a couple of trial and error cycles to get to it. To put it more crudely, a facet of product skill lies in picking the right set of experiments, in order to minimise the number of trial and error cycles involved. And of course there’s a probabilistic element here: sometimes you’re lucky and you land on the right set of experiments and get to the first workable solution quickly; sometimes you’re not and you have to do way more than you originally expected. But speaking more generally, an experienced product person is likely to spend fewer cycles than a novice.
I wish I could say that I had this skill, but the truth is that I don’t: I’m walking this particular product space for the first time.