This is part of the Operations topic cluster, which belongs to the Business Expertise Triad.

This is part of the Market topic cluster, which belongs to the Business Expertise Triad.

Putting the Jobs to be Done Interview to Practice


    Note: This is Part 7 in a short series of essays on Understanding Customer Demand. Read Part 6 here.

    At the end of last year, we ran a Jobs to be Done (JTBD) interview process with Commoncog members. Much has already been written about the interview method, but I thought it would be useful to describe what we’ve learnt from doing it. This includes things that we struggled with, things we found oddly tricky, and things that we wished someone had told us earlier.

    At this point in the Understanding Customer Demand series, you should already know what the JTBD framework is, and how it fits into the arsenal of demand thinking that we’ve covered. This essay is going to be a short one: a field report from practice.

    Why Is This So Hard to Do?

    The first thing you’ll discover about the JTBD interview is that it’s really hard to do. I think this is obvious from reading between the lines of all the major JTBD texts: Demand Side Sales 101 from Bob Moesta is filled with sidebars about how things can go wrong with the interview process; the team over at the Re-Wired Group saw it necessary to release The Jobs to be Done Handbook — a 66-page booklet designed to be skimmed right before an interview — which gives you a hint that the interview process is not as straightforward as it seems.

    Why is this so difficult? I think the main reason is that the JTBD interview is a form of ethnographic interview, and if you’ve never done ethnographic research before, you’d have to pick up the basics as you execute.

    If we take a step back, the JTBD interview is actually really simple. It is just a way of getting your customers to tell you the story of how they bought your product.

    The problem with it is two-fold. First, most customers can’t remember how they bought your thing. Second, the human mind is not designed for self-knowledge. Humans will make up reasons for why they do things, and you have to catch them when they do. Skill at the JTBD interview is about getting better at both things separately.

    Of the two, the first problem is easier to solve. The JTBD texts have a list of tricks you can use to get your customer to remember. The most basic one is to never interview a customer who bought too long ago. At Commoncog, we didn’t take this seriously enough, and so we ended up with a bunch of interview transcripts that were useless for our purposes.

    In general:

    • Don’t interview anyone who bought more than 1.5 years ago.
    • If you are a JTBD novice, you’re going to struggle with those who bought between six months and 1.5 years ago.
    • Anybody who purchased less than six months ago is going to be relatively easy.
    • The more recently they’ve purchased, the easier your job will be.

    This leads to a counter-intuitive recommendation: at the beginning of your JTBD journey, limit your interviews to customers who have purchased less than six months ago. As you get more skilful, you can increase your filter to customers who have bought a year or so ago. Start with the easy ones, the ones who bought most recently, and then progressively step up the difficulty.
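The recency rules above can be sketched as a simple filter over your customer list. (A minimal sketch: the record format is invented, and treating the thresholds as hard cutoffs is my own assumption — the JTBD texts present them as rules of thumb.)

```python
from datetime import date

def interview_difficulty(purchased_at: date, today: date) -> str:
    """Bucket a customer by how hard their JTBD interview is likely to be."""
    age_days = (today - purchased_at).days
    if age_days > 365 * 1.5:
        return "skip"  # bought more than 1.5 years ago: don't interview
    if age_days > 180:
        return "hard"  # six months to 1.5 years ago: tough for novices
    return "easy"      # less than six months ago: start here

# Hypothetical customers, checked against a fixed "today":
today = date(2024, 12, 1)
for name, bought in [("A", date(2024, 11, 3)),
                     ("B", date(2024, 2, 14)),
                     ("C", date(2023, 1, 20))]:
    print(name, interview_difficulty(bought, today))
```

Starting with the "easy" bucket and widening the filter as you improve mirrors the progression described above.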

    The authors give you a handful of tips for dealing with customers who can’t remember:

    • Take the pressure off — Open every JTBD interview with “Imagine that I’m filming a documentary. I’m just trying to understand how people buy <insert product name>. Anything you can tell me here is going to be useful. Don’t worry if you don’t remember — that’s perfectly normal. There is no wrong answer.”
    • Work from the purchase backwards — The JTBD framework is built around the ‘timeline’ — that is, the series of events that led to the purchase. To make things easier, the authors give you the first question you should ask in every interview: “Why did you buy <product name> and where were you when you bought it?”
    • Ask for specifics — Always ask for the name of the dog. If they mention that they purchased at the kitchen table, ask them if they did it in the morning, or at night. If they tell you they’re married, ask for their spouse’s name. If you hear a dog barking in the background, ask for the name of the dog, and whether said dog was present when they made the purchase. Asking for concrete detail helps with recall.
    • Use environmental cues to hammer down the timeline — If they say “I think I bought it last year” ask them: “What was the weather like when you bought it? Was it hot out? Cold? Was it dark? Rainy?” Or “You mentioned that you fly to France every summer. Was this purchase before the trip, or after?”

    All of these tricks help, and I’ve found that you can get quite good at them in just a few interviews.

    The second problem — that humans are not made for self-knowledge — is the harder problem to tackle. At various points in your interviews you’re going to want to slow down and dig into a specific event, in order to isolate one of the triggers on the JTBD timeline. A strong assumption behind the JTBD framework is that “things don’t happen randomly” and “there is causality in human behaviour.” Now, whether or not there is true causality in human behaviour is a matter for deep philosophical debate. But for the specific context of this interview, it is helpful to believe that this is the case, and therefore it is helpful to act as if it were true.

    Pushing for causal factors is tricky, because you don’t want to push too hard. Skill at this bit is:

    1. Noticing that something they’ve said doesn’t make sense, or doesn’t tally with other pieces of information they’ve given you earlier in the interview.
    2. Pointing that out, gently, so you can dig deeper.
    3. Knowing when to change topics so they don’t feel pushed and become defensive.

    I don’t have a good answer for how to get better at this, beyond “put the reps in, and get feedback from your team.” For instance, early on in our project one of my teammates pointed out that I should have the date of purchase ready before each interview, since Commoncog sold subscriptions and purchase dates were trivially easy for us to track. While listening to my interview recordings, he had noticed that there was often an awkward bit where I would be pulling up the details of their subscription. This was a waste of time. I hadn’t noticed this myself, and I added that task to my pre-interview checklist immediately after receiving this feedback.

    The authors of JTBD have another standard recommendation that I didn’t take seriously, but should’ve: they recommend that you conduct the interview with one other person, as a pair. That is: a team of two interviewers for each interviewee. The reason is that running the JTBD interview is very cognitively demanding, and it helps to have a second brain catch you in case you miss something. Unlike with Sales Safari, if you forget to ask about something in the JTBD interview, it’s likely you’ll have lost that insight forever.

    This complicates the interview process, though: it now means that you have to find a partner, schedule them for each and every customer interview, and take the time to debrief, reflect, and improve as a team.

    Practice Before Each Interview Project

    One thing we did do — that is considered best practice amongst JTBD practitioners — is that we ran practice interviews with each other before running them with our customers. I highly recommend this. Pick a friend or a family member, and then pick a purchase that they made not too long ago. (Make sure the purchase is not a gift, nor an impulse buy — these are two scenarios where the JTBD framework just doesn’t work.)

    JTBD interviewing skills do atrophy from lack of practice. In this they are like riding a bike, or playing tennis. It’s a good idea to do practice interviews before every major customer research project. I learnt this the hard way: I ran a JTBD interview a couple of weeks ago, after an eight month break, and found myself horribly rusty.

    I’m not making that mistake again.

    The After Interview Review

    Unlike Sales Safari, JTBD interview analysis is comparatively easy to do. In Sales Safari, a huge part of the difficulty with the technique is divining intention and psychology based on a free-form interview. With a JTBD interview though, the analysis is almost trivial.

    You are expected to fill in a standard template, built around the JTBD timeline, which looks like this:

    (Source: The ReWired Group’s page on JTBD).

    For each of the points in the timeline, you are expected to jot down the four forces that acted on them in that moment. This is no more difficult than a listening comprehension test — assuming you’ve done the interview correctly.
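Filling in the template amounts to structured note-taking. As a sketch (the dataclass framing is my own; the four forces and the timeline stage names come from the standard JTBD material):

```python
from dataclasses import dataclass, field

@dataclass
class Forces:
    """The four forces acting on the buyer at one point on the timeline."""
    push: str = ""     # what pushes them away from their current solution
    pull: str = ""     # what attracts them to the new solution
    anxiety: str = ""  # what makes them hesitate about switching
    habit: str = ""    # what keeps them attached to the status quo

@dataclass
class TimelinePoint:
    stage: str    # e.g. "first thought", "passive looking", "deciding"
    event: str    # what happened, in the customer's own words
    forces: Forces = field(default_factory=Forces)

# One interview produces an ordered list of these points:
timeline = [
    TimelinePoint("first thought", "A colleague forwarded a paywalled essay"),
]
```

If the interview went well, every point has its forces filled in; if it went badly, the gaps in this structure are exactly where you'll notice.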

    Unfortunately, if your interview was badly done, you’ll find that you can’t fill in the template. And there’s nothing you can do to salvage it — you’ll end up with a half-empty timeline. This is why getting the JTBD interview right is so, so important.

    Let’s take a step back, though. It’s worth asking why qualitative customer analysis is so difficult. At this point in the series, we’ve already taken a look at one such method that I’ve described as exceedingly tiring: Sales Safari. What is true for Sales Safari analysis is still partially applicable to JTBD interview analysis.

    Sales Safari is difficult because everything that a customer says has two kinds of informational value:

    1. Content value — the information value of what is being said.
    2. Positioning value — the information value of why something is being said, and how it is being said.

    The most revealing, most valuable bits of any customer research interview lie in the positioning component of what is said, not the actual contents of what is being said. Divining what is unsaid — and why — takes skill. It is exhausting in the same way that empathy is exhausting. In fact, one of the best ways you may evaluate skill at demand is to observe how much positioning value someone is able to extract out of a customer call.

    We may get more specific about what positioning value looks like, of course. When you’re listening to an interview, ask:

    • Why are they saying this?
    • What words are they using? Why are they using those words, instead of alternatives, when describing their situation? What does this tell you about them?
    • What psychological state are they in right now? What psychological state were they in back then (during the events that they are describing to you)?
    • How confident do they sound? What is their tone right now? Is that normal? Does it match their situation?
    • What kind of person are they? What is their job? Their family life? What is the situation that they find themselves in at the time of purchase?

    This is still only half the picture. The other half comes from noticing patterns across customers. Things like “wait, isn’t it odd that five customers have said the exact same thing? What’s going on here?”

    Listening for positioning value and analysing for patterns across customers is what makes qualitative customer interviews difficult. JTBD is slightly easier, of course, because it merely focuses on explicating a buyer’s timeline. But to get the most out of this process, you still need to analyse a little for why customers are doing the things they do — and this analysis is where most of the cognitive costs will lie.

    What Did We Learn From Commoncog’s JTBD Project?

    It’s worth asking what we learnt at the end of this three-month project.

    What did our JTBD interviews uncover? We learnt a number of things, but most of those findings will require too much context to understand. Instead, I’ll describe just one finding to you, as an example of how the JTBD framework can be useful — especially when used in combination with a data capability.

    Commoncog runs a membership subscription, and — roughly speaking — has two parallel paths in its growth loop.

    In the first path, a reader signs up for the newsletter, and becomes a regular subscriber for some indeterminate amount of time. At some point, they become a paying member.

    The first path: newsletter sub, then member

    In the second path, a reader discovers Commoncog, and becomes a repeat reader. They return to the site over a period of many days, then weeks, then months. They never once sign up for the newsletter. After some indeterminate amount of time, they become a paying member.

    The second path: repeat reader, then member

    We know both paths exist because of a) common sense, and b) the multiple random conversations we’ve had with members over the years.

    What we didn’t know — and hadn’t even bothered to find out — was how many paying members became paying members through the first path vs the second. I assumed that more members came from the first path (reader → newsletter subscriber → paying member) than the second (reader → repeat reader → paying member), because I was influenced by the growth of newsletter platform Substack, and assumed that whatever applied to newsletters also applied to Commoncog.

    Boy was I mistaken.

    During the JTBD interviews, we began to realise that members who became paying customers through the first path experienced a different timeline from readers who became paying customers through the second path.

    To cut a long story short, members who had been newsletter subscribers became members in the following manner:

    1. They found Commoncog through some source.
    2. They signed up for the newsletter and became regular readers of the newsletter.
    3. Commoncog alternates between a free article and a paywalled article every week. At some point they think “hmm, I’m reading all the free articles, but I wonder if I should pay for the paywalled articles at some point.”
    4. After a period of months — sometimes years — Commoncog publishes a paywalled article that just happens to be about something they’re dealing with at work or in their lives.
    5. This serves as the buying trigger and gets them to take out the credit card and buy.

    On the other hand, members who became paying customers through the second path went through the following journey:

    1. They found Commoncog through some source.
    2. They binge read a bunch of free articles on Commoncog.
    3. At some point they discover a post that belongs to a series of articles, and then they read every single article in that series that is not paywalled.
    4. They bookmark Commoncog and become repeat readers, returning to the site at a semi-regular cadence, over time.
    5. At some point — typically not too long after initial discovery — they decide that they really really want access to some of those paywalled articles that they’d skipped over in a series. They decide to purchase.

    In fact, this broad pattern of behaviour came up again and again in the set of customers we talked to. Intriguingly, nearly every member who experienced the second path bought because of a paywalled entry in a series.

    More accurate second path: repeat reader, then member

    At this point we needed to put some numbers around this. My wife, Hien, is a data analyst, and she helps us put together our WBR. She ran the numbers.

    We learnt that 56.5% of members came from the second path. Only 43.5% were newsletter subscribers before purchasing.

    That meant that our causal model of Commoncog’s growth was wrong. More people became paying Commoncog members after hitting a paywalled article (in the context of a series) than from becoming a free newsletter subscriber first.
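The path split itself is a trivial computation once each member is labelled by path. A minimal sketch with made-up labels (the real analysis ran over Commoncog’s actual subscription records):

```python
# True if the member subscribed to the newsletter before paying (path one),
# False if they were a repeat website reader who never subscribed (path two).
# Invented labels, for illustration only:
was_subscriber_first = [True, False, False, True, False, False, False, True]

path_one = sum(was_subscriber_first)                    # newsletter first
path_two = len(was_subscriber_first) - path_one         # repeat reader

total = len(was_subscriber_first)
print(f"path one (newsletter first): {100 * path_one / total:.1f}%")
print(f"path two (repeat reader):    {100 * path_two / total:.1f}%")
```

The hard part is not the arithmetic but the labelling — deciding, from your data, whether each member was a subscriber before their purchase date.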

    Huh.

    Hien then created the following visualisation:

    This was interesting, for a different reason.

    In the JTBD framework, purchases are made because of a buying trigger — that is, some … thing that gets the buyer to take action. The most common form of a buying trigger is a time wall, which is basically a deadline. This time wall might be manufactured (“10% off until Friday!”), or it might be natural (“I need a new laptop before my trip to London next week”). It might also be made up (“I will buy this course before the end of the day.”)

    What this graph shows us is this: in any given month, the number of paying members who bought because of a paywalled article typically outstripped the number who bought after becoming a regular newsletter reader. But in two of those months, newsletter readers who became paying members far outstripped the number of new members who were website readers. Those two months corresponded to the two times we increased the price of the membership program. In both cases, I signalled the price increase a month before the actual prices went up.

    In other words, the price increase served as a time wall. This wall worked on both website readers and newsletter subscribers … but conversion was higher amongst newsletter subscribers. One conclusion might be that Commoncog’s email list contains a pool of potential customers in the ‘passive looking’ phase. Creating a time wall should nudge these individuals towards a buy decision.
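Spotting time-wall months in the data amounts to grouping new members by month and path, and flagging the months where the newsletter path dominates. A sketch with invented counts (the record format and the numbers are illustrative only):

```python
from collections import defaultdict

# (month, path) for each new paying member — invented data:
signups = [
    ("2023-03", "website"), ("2023-03", "website"), ("2023-03", "newsletter"),
    ("2023-04", "newsletter"), ("2023-04", "newsletter"),
    ("2023-04", "newsletter"), ("2023-04", "website"),
]

by_month = defaultdict(lambda: {"website": 0, "newsletter": 0})
for month, path in signups:
    by_month[month][path] += 1

# Months where newsletter conversions outstrip website conversions are
# candidates for a time-wall effect, e.g. an announced price increase.
for month, counts in sorted(by_month.items()):
    if counts["newsletter"] > counts["website"]:
        print(month, "possible time wall", counts)
```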

    How is this Knowledge?

    The purpose of data is knowledge. Knowledge is defined as ‘theories or models that allow us to better predict the results of our business actions.’ It is not enough to have findings in the way that I’ve described above — we want to know how they can be predictive, because we want to be able to act on them.

    So how is this knowledge?

    One piece of knowledge is clear: more people become paying Commoncog members through the second path, so we should focus on that. Notice that this may be stated in the form of a prediction: “if we focus on the second path, we should see higher conversions to paying members.”

    But: how? There is, of course, an obvious hypothesis: because the majority of repeat readers paid after encountering a paywalled article (in the context of a series), we should modify the Commoncog site to increase traffic and ease discovery for published series. This may also be stated as a prediction: “if we increase visitor traffic to the Commoncog series, we should see an increase in member conversions, after a delay of two months.”

    (Amongst other things, this explains the creation of the Commoncog syllabus page).

    Notice that we still have to test this. All we have right now is a correlation. Sure, this is a strong hunch — so many people seem to have bought in the exact same way! — but we can really only know after we’ve tested against reality. Causality only exists if you’ve made a prediction and then verified that the prediction is true.

    There is one other hunch that we can chase down. The price increase and subsequent spike in purchases tell us that a good number of mailing list subscribers are in the ‘passive looking’ phase of the buyer’s journey. They’ve considered becoming a Commoncog member in the past, but have not experienced a buying trigger. Creating a time wall (such as sending an expiring discount to the list) should result in a burst of signups.

    Of course, whether we want to do that is a different matter. Commoncog doesn’t discount right now. Are there other ways to create a time wall?

    Wrapping Up

    I want you to notice a couple of things. First, this finding started out as a qualitative observation from executing a JTBD interview series. We then firmed up the hypothesis by running the numbers — using ‘data as an added sense’.

    Then, we turned our findings to action by asking “how is this knowledge?” — which is shorthand for: “what predictions can we make, based on these findings?” Asking this question naturally leads to: “how can we test these predictions to see if they hold up?” — which leads to experiments.

    (Note that we run our own Amazon-style Weekly Business Review, so these findings were presented in the context of a broader metrics review. That particular meeting — the one in which we presented these findings — was probably the most interesting one last year, because it caused us to do a major update to the causal model of the business in our heads.)

    We may generalise all of this into the following series of steps:

    1. Figure out how your customers buy, either by using data, or via qualitative interviews (Sales Safari, or JTBD, or DPIs).
    2. Ask: “how is this knowledge?”, which forces you to come up with predictions. (“Doing X will result in Y after Z time”).
    3. List out a number of experiments for these predictions and prioritise those experiments. This could be modifications to increase conversion or engagement at the product level, but it could also be experiments targeting marketing channels.
    4. Instrument and execute.
    5. Reflect on the findings by pulling data or by running another round of interviews with a batch of new customers.
    6. You are back at Step 1.

    Long-term readers will notice that this is the Plan-Do-Study-Act loop, which we’ve covered extensively in the Becoming Data Driven Series.

    But enough about that. Let’s pull back to talk about demand.

    In this essay I’ve shown you a handful of things:

    • First, we talked about what makes the JTBD interview so difficult to do. I listed a bunch of things I wish we’d known before we went down this path. The implication, though, is clear: these difficulties are worthwhile to work through, because of what you get at the end: a tool that tells you how your product is actually bought.
    • In passing, I gave you a brief overview of the JTBD interview format itself. I should note that my descriptions are not complete — if you want to put this to practice for yourself, you should refer to more complete resources. (I recommend both Demand Side Sales 101 and The JTBD Handbook. Both are very short books — you should be able to finish them in a single weekend.)
    • Third, I walked you through a concrete example, using one finding from our experience running the JTBD interviews on Commoncog members.
    • Finally, I pointed out that operationalising this is a mix of interviewing for demand and measuring using data. This pattern is broadly applicable to anything you wish to do to improve in business; it is a great example of “data does not stand in opposition to qualitative judgment” that I’ve tried to communicate over the course of the Becoming Data Driven Series.

    This essay has gone on for long enough.

    I think it’s about time that we wrap up the entire series. I’ll see you in the next instalment — which will be our last.

