How to Run Smart Experiments When You Just Don’t Know

    How all good entrepreneurs run experiments at the earliest stages of a business, and how you can use this approach for your own career.

    A couple of years ago I had a Commoncog reader join me as a summer intern because she wanted to observe me doing ‘startup stuff’. The ‘project’ I was working on was to figure out a scalable way to create new cases for the Commoncog Case Library. At the end of the internship she gave me a couple of notes about the experience, and we met up for a meal to discuss those notes. At the top of her list was the following observation: “I’m surprised you didn’t have a hypothesis before you ran this!”

    I was taken aback. I was then in the midst of implementing — and then publishing — a series of essays about Becoming Data Driven in Business. In the series I argued for creating hypotheses and then testing them in tightly-scoped, iterative bets. I wrote about how the Amazon Weekly Business Review was less a metrics review meeting and more a way to generate knowledge about the ‘causal model’ of a business. I also explained how the ideas of statistician and ‘business philosopher’ W. Edwards Deming (which the WBR inherited, by way of Six Sigma) gave you the tools for rigorous, single-subject studies.

    And yet here I was, running a full-blown experimentation loop over the summer without any explicitly stated hypotheses. I didn’t have pre-registered input/output metrics, or even a planning doc. What was I doing?

    I can’t remember what I told my intern. I think I mumbled something about how, at the earliest stages of a bet, when you don’t know anything about anything, you can’t generate hypotheses. Instead, “you just want to throw shit at the wall, and see what sticks.”

    All of these things were true, for I felt in my bones that generating hypotheses was the wrong thing to do for this situation. I had never hired a large group of writers all at once, for instance. I had no idea what problems would crop up, or what could go right with my approach. But I lacked the language to describe why I felt these things.

    Until now.

    What is Good Experimental Design?

    In school, we are taught that good experiments should have solid experimental design and a pre-committed, falsifiable hypothesis. This attitude tends to dominate execution in big companies. For instance, if you are a conscientious analytical type who is used to large-company execution, it might feel weird to launch a new project, product, or startup without articulating some hypotheses or setting some goals before you do so!

    Here’s a more concrete example. My friend Crystal Widjaja has a framework for coming up with new conjectures in her Reforge course Mastering Product Analytics — an approach I’ve adopted for my own operating bets. She argues that good ‘product conjectures’ have three properties:

    • They are debatable (that is, it is not immediately obvious that it would work).
    • They are testable (the bet is verifiable using some kind of experiment).
    • They are meaningful (the outcome of the bet, if it works, generates a meaningful impact on some output that leadership cares about).

    Widjaja’s criteria are excellent and I’ve used them in the months since she first taught them to me. But I’ve noticed that they don’t work at the earliest stages of an uncertain thing. What do I mean by uncertainty? Loosely speaking, ‘uncertainty’ is when you don’t know and can’t even imagine what all the outcomes are. Not knowing the outcomes makes it hard to come up with conjectures in the first place.

    For instance, let’s say that your boss asks you to scale up an existing social media campaign, going from $5,000 to $50,000 in weekly Facebook ad spend. Not everything is known, but the unknowns are tractable enough that you can probably figure out a game plan, especially if you have some experience with marketing. Now compare that to your boss asking you to move to China next week — a country you’ve never been to — to set up a new office and launch products from it, something that nobody in your company has done before. One situation is vastly more uncertain than the other. There are more things that can go wrong (or right!) with the China project than with the ad spend project.

    My point: it is difficult to do conventional experimental design when you know very little about the domain you’re operating in. The more uncertain situation calls for a different approach.

    So what do you do?

    It should not surprise you that good entrepreneurs are very good at this stage. What they do is less like an ‘experiment’ and more like ‘throwing shit at the wall and seeing what sticks’. A more acceptable way of describing what great entrepreneurs do is that they ‘take action to generate information’. But there is a method to their madness.

    The short version is that entrepreneurs take action to generate answers to four questions:

    1. What are the possible outcomes?
    2. What are the further actions?
    3. What is the value of each outcome relative to the others?
    4. What causal relationships exist?

    Or, another way of thinking about this is that folks who are good at high-uncertainty execution are more skilled at making sense of the splatter pattern after they’ve thrown something at the wall.

    In other words, their skill doesn’t come from good experiment design! Their skill comes from the sensemaking they do after they’ve taken action to generate some information.

    Let’s go through the four questions quickly, and in order.
