This is part of the Expertise Acceleration topic cluster.

Transcript of David MacIver on Life Skills for Programmers

This is the full transcript of Commonplace Expertise #2: David MacIver on Life Skills for Programmers. It was produced with the help of Guan Jie Fung.

Cedric Chin: Welcome to Commonplace Expertise, where we take a look at the expertise that exists in the heads of the most interesting people around us. These conversations are meant to help you make better business and career decisions and I hope you find them useful. Today, I'm speaking to David MacIver.

David is best known for pushing the adoption and ergonomics of property testing in software with his testing library, Hypothesis. Hypothesis is well regarded and widely used in the Python programming language community, and it introduced a handful of innovations that are now quite widespread in the practice of property testing. You'll hear more about Hypothesis during the podcast, as we talk about what he's learned pushing the boundaries of a domain.

Then we shift gears to talk about his coaching practice. David specializes in helping programmers with self-improvement, more effective learning, and developing soft skills, which many computer programmers are likely to struggle with in ways that may limit their careers or their personal development. I hope you enjoy my conversation with David MacIver.

What Hypothesis Is

Cedric Chin: Hi David. To get started, I was thinking about how to open up this podcast and I thought a good way is to ask you about Hypothesis. Like, what is Hypothesis and how did you get into building this thing? Maybe explain it for those of you who don't do software, and then talk about the story of how you got started building Hypothesis.

David MacIver: Sure, yeah. So the explanation I usually give for non-software people is I start by telling them a little bit about software testing. Which is when you've written a piece of software you want to know that it does the right thing. And one way you can do this is you can basically just manually poke about it and see whether it looks roughly right.

But then you make some changes and you have to now redo the entire manual process over and over again, every time you make some change. This is slow and tedious. So what we do is we write software tests, which are essentially just a little program that automates the thing that a human would do. It runs through a set of predefined actions on the software and it checks that certain things have happened.

So for example, it says login to the website, click this button, you should see this piece of text on the screen. That sort of thing. This is fairly simplified and like a lot of software tests don't run the full application stack, but that's the basic idea.

And the problem with these software tests is that they don't really help you out in any way other than what you would already do by hand. And so Hypothesis is a piece of software that helps you write software tests that, in some sense, actively go out looking for bugs in the software, by generating a variety of inputs and actions to run against it.

So for example, with our login to the website and click this button and you should see this text on the screen, you might want to vary the details of the user you're logging in as. You might want to try a variety of different names for the user. You might want to make sure that it still works if they've got accents in their name or Chinese characters or the like.

It can get more complicated than that. But basically Hypothesis generates data for use in software testing and potentially generates a variety of different actions and tries to use this to explore your software in a way that it can find bugs that you wouldn't have found by hand without a lot more manual testing than you otherwise want to do.
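The generate-and-check loop David is describing can be sketched in plain Python. This is a toy illustration only, not Hypothesis's implementation (in Hypothesis itself you would write a test decorated with `@given(st.text())`); the buggy function, the alphabet, and all the names here are invented:

```python
import random

def generate_username(rng):
    """Generate a random username from a small alphabet that mixes
    ASCII letters, accented characters, CJK characters, and spaces."""
    alphabet = "abcXYZ éü漢字 _-'"
    return "".join(rng.choice(alphabet) for _ in range(rng.randint(1, 20)))

def normalize(name):
    """A deliberately buggy function under test: it strips surrounding
    whitespace, but returns None for names that are *only* whitespace."""
    stripped = name.strip()
    return stripped if stripped else None

def check_property(name):
    # The property we claim holds: normalizing any username yields a string.
    return isinstance(normalize(name), str)

def find_counterexample(trials=5000, seed=0):
    """Generate many random inputs and return the first one that
    violates the property, or None if no failure is found."""
    rng = random.Random(seed)
    for _ in range(trials):
        name = generate_username(rng)
        if not check_property(name):
            return name
    return None
```

Running `find_counterexample()` will eventually stumble on a whitespace-only name and expose the bug that manual spot-checking would likely miss.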

The Story of Hypothesis

Cedric Chin: What made you interested in this particular problem, in the beginning?

David MacIver: So, it happened more or less by accident. Hypothesis is descended from a software library which does similar things in another programming language. The other programming language is called Haskell, and the original library that Hypothesis is based on is called QuickCheck.

I hadn't used QuickCheck very much. But I had used something else called ScalaCheck, which is QuickCheck, but for the programming language Scala. And so I was already reasonably familiar with the basic ideas. I think I'd been talking to someone about property-based testing, which is what this type of testing is called, at a conference. And shortly after that, I was moving jobs and I needed to learn a new programming language: Python. I wanted a toy project to write in Python, and a QuickCheck port seemed like a reasonable thing to do. So I wrote the very first edition of Hypothesis, which then languished in not-quite-obscurity for a few years, because I'd left that company. I had moved on to Google, where I couldn't really work on it. And in the meantime, people started using — I'm sorry, this is a very long story by the way.

Cedric Chin: No, this is a good story.

David MacIver: Okay. In the meantime, people started using Hypothesis because there were a bunch of other QuickCheck ports to Python, but none of them were very good.

Neither was Hypothesis. Hypothesis was very bad, but Hypothesis had one fairly crucial feature that none of the others had, which is what's called shrinking in QuickCheck language, or test case reduction, to use the general term. Which means that when Hypothesis first finds a bug, it will have, like, here is the username that triggers the bug in the login form. The first thing it finds might be this horrible 200-character nonsense string.

And what will happen with Hypothesis is that it won't stop there. It will say, okay, what part of this string actually matters? And try to make it smaller and simpler. So eventually, you'll probably end up with a name that just has the one character that causes problems.
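That reduction loop can be sketched in a few lines of plain Python. This is illustrative only — Hypothesis's actual shrinker is far more sophisticated — and `triggers_bug` is an invented stand-in for a failing test:

```python
def triggers_bug(name):
    """Invented stand-in for a failing test: the bug fires whenever
    the username contains a NUL character."""
    return "\x00" in name

def shrink(name, failing):
    """Greedy test-case reduction: repeatedly try deleting one character
    at a time, keeping any shorter string that still fails the test."""
    improved = True
    while improved:
        improved = False
        for i in range(len(name)):
            candidate = name[:i] + name[i + 1:]
            if failing(candidate):
                name = candidate
                improved = True
                break
    return name

# A horrible 200-character nonsense string that happens to contain the bug:
original = "x" * 120 + "\x00" + "y" * 79
minimal = shrink(original, triggers_bug)  # reduces to just "\x00"
```

The loop deletes everything that doesn't matter, so the report the developer sees is the one problem character, not 200 characters of noise.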

Cedric Chin: Just a sense of the timeline, what year was this?

David MacIver: So that would have been 2013, I think.

Cedric Chin: How many years after you started the library?

David MacIver: No, I'm sorry. 2013 was when I started the library. But Hypothesis had shrinking from the very beginning.

Cedric Chin: Wow.

David MacIver: And the reason was basically because I had already used ScalaCheck extensively. So I knew just how annoying property based testing is without shrinking.

To be clear, shrinking is not my invention. Shrinking is something found in QuickCheck and ScalaCheck and most of the good QuickCheck ports, but shrinking is also quite hard to implement, so many of the bad QuickCheck ports don't have it. And they're sort of borderline unusable as a result.

Hypothesis's Contribution to Property Testing

Cedric Chin: So I'll tell you why I'm interested and why I started the conversation with your work on Hypothesis. In our conversations, you've mentioned that with Hypothesis you have, in many ways, moved to the very edge of property testing. And in many ways the work that you've done with Hypothesis has pushed forward, I think, the adoption of property testing in general in many programming languages.

And some of the techniques that you came up with during the work that you did on Hypothesis are now sort of like — they're in the air. And so I want to dig into what that was like. Could you talk a bit about what your contributions were and what the state of the art was at the time, and then where you pushed it towards?

David MacIver: Sure. So I could say like in 2013 Hypothesis was very much not the state of the art. It had a weird API, it was kind of clunky, and it had all of the limitations that other implementations had.

So the thing that I've pushed the state of the art on is basically this shrinking aspect of it. There are two things in Hypothesis that are, I think, state of the art, and they are both usability improvements. I don't want to undersell that, because this is why Hypothesis has driven a lot of adoption of property-based testing. Certainly a lot of adoption of property-based testing in Python. Like, I think it's fair to say that property-based testing would not be a thing we used in Python if it weren't for me.

But one of the problems you have in property-based testing is that you generate an example, a test case for testing the software application, and you need to make sure that it's the right sort of example. So for example, if I were to generate something that wasn't a valid login in some way — let's talk about signup rather than login — say I generated something where I was signing up for a website, and I filled in all the details on the form, and it asks me to put in my password and then it asks me to confirm the password again. I have to make sure that the two values are the same in order for this to work. And the website has to be able to work if the two values aren't the same, in the sense that it should correctly reject it.

But if my test is: sign up for the website, log in, and do this thing, then it will fail at the point of signup if the two password fields are not identical. So this is what's called a validity criterion. A test case can be invalid in the sense that it doesn't do the things that are needed in order to test the thing that it's supposed to test. And often it's very easy to write things where you guarantee that you're generating a valid thing upfront. So for example, you could generate a valid signup by generating a password and then reusing the same password for the confirmation field.

But classically the way shrinking worked is that it took the test case that you had generated and it tried modifying it. So it tries cutting bits out of it. It says, okay what happens if we make the password shorter? And it does this without knowing anything about how you generated it. So what would happen is that shrinking would try making the first password smaller and not the confirmation password. And now that wouldn't work.

Depending on how you have the test set up, this can result in two different types of problem. One is just that shrinking may not be very good, because it can't make the changes it needs to make. And the other one is that if your test is written in a particular way, it might cause the test to fail. Like, you might get an error that says the form was rejected, and this might look like a failing test. And now what happens is that the generation found a real bug at the time, but the shrinking then turned it into a fake bug. It wasn't a real bug, and now you'll get annoyed at the testing system.

And so the thing that I invented — it turns out that there was some prior art, but it was good that I wasn't aware of this prior art, because it doesn't work very well and I would have copied it otherwise — was a particular way of making sure that shrinking works through the generation system.

You can think of the test case generation as providing a recipe for how to construct a valid test case and the shrinking in Hypothesis works through that generation system. So what will happen is it will try shrinking the first password, and then it will know that it's still supposed to just copy that over to the next one.

So as a result, people never really have to know how to write a shrinker, which was one of the problems that happens in classic QuickCheck. And also the test case produced will always be valid.
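The idea can be sketched like this: the generator draws from a replayable sequence of choices, and the shrinker edits that sequence rather than the finished value. Every shrunk candidate is rebuilt by the same recipe, so the confirmation field can never get out of sync with the password. This is a toy sketch of the concept with invented names, not Hypothesis's internal representation:

```python
def draw_signup(choices):
    """Build a signup form from a replayable sequence of integer choices.
    The confirmation field is derived from the password, so every test
    case this recipe produces is valid by construction."""
    length = choices[0] % 16 + 1
    password = "".join(chr(97 + c % 26) for c in choices[1:1 + length])
    return {"password": password, "confirm": password}

def shrink_choices(choices, failing):
    """Shrink the underlying choice sequence, not the final value.
    Every candidate is re-run through draw_signup, so shrinking can
    never produce an invalid (mismatched-password) test case."""
    def fails(cs):
        try:
            return failing(draw_signup(cs))
        except IndexError:  # too few choices left to draw from
            return False

    improved = True
    while improved:
        improved = False
        # First try deleting one choice at a time.
        for i in range(len(choices)):
            candidate = choices[:i] + choices[i + 1:]
            if fails(candidate):
                choices = candidate
                improved = True
                break
        else:
            # Then try lowering individual choices towards zero.
            for i, c in enumerate(choices):
                candidate = choices[:i] + [0] + choices[i + 1:]
                if c != 0 and fails(candidate):
                    choices = candidate
                    improved = True
                    break
    return draw_signup(choices)
```

Starting from a long random choice sequence whose password happens to contain a problem character, the shrinker converges on the smallest signup form that still fails, and both password fields stay identical throughout.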

Cedric Chin: Interesting. So you said there were two particular ways, right? What was the second way?

David MacIver: Yeah, the second way is a way of generically remembering the test case for later. So what happens with Hypothesis is if you rerun a failing test, it will immediately remember the way it failed last time and it will just reuse that immediately.

Cedric Chin: Yeah, that makes sense.

David MacIver: Yeah, and it's very obvious that that's the thing that you want, but it's quite hard to do for a couple of reasons. But the main one is a test case could be anything and there's not necessarily a nice, simple format to write in a disk and also if the test changes in some way then it may be that the old test case is no longer valid for the new test. I don't have a very good example for that, but—

What It Was Like Exploring the Design Space for Hypothesis

Cedric Chin: It's all right.

What I'm trying to figure out is, to somebody listening to this who has never designed something from scratch, much less pushed something to the very edge of what has been done before — what was difficult about this process?

How hard was it for you to reach these two sort of major innovations?

David MacIver: The honest answer is that it wasn't that hard. Partly this was luck, in that I had some good background for it, but a lot of it was that I was able to do this because not that many people had worked on this particular class of problems. Like, people have worked on property-based testing, but shrinking has always been treated as a minor side note.

And the reason I was working on them is essentially because people were telling me that things were irritating about the library and I was going "you're right, that is irritating. Let me see if I can figure that out." So it was essentially research driven by primarily user experience goals. There was no sort of like grand research agenda. I was just listening to people telling me what was irritating and trying to help them with that.

I think the remembering-saved-examples thing, saved test cases, was because it was something that I wanted, so I did it because I found it irritating to not have that. And that ended up being a surprisingly useful way into the idea that also provided fully generic shrinking. What happened there was an initial, very bad implementation, where I wrote a bunch of custom shrinking logic and also had a fallback system. Because you had to be able to serialize the test case onto disk, and you had to be able to check that the thing you got back from the disk was valid, the serialized version was a format that you could do test case reduction on.

Cedric Chin: Oh, wow. So it was like you reached that point and you realized that you could use that as well for…

David MacIver: Yup. And then the next-generation version, which came in Hypothesis 3.0, I think. Yeah, it was meant to come in 2.0, but I couldn't make it as backwards compatible as I thought. So I ended up having to bump the major version again.

Sorry, no, it wasn't meant to come in 2.0. 2.0 was a deprecation release to just remove a bunch of stuff, and then it turned out I had to remove a little bit more to get the full idea working. So 2.0 was very rapidly followed by 3.0, which had a universal format that worked for everything. So there was now just a single format that is both the on-disk format and what shrinking operates on.

Cedric Chin: Right. Two interrelated questions here. I think we've talked about this before, like how software design is very often a step-by-step unfolding, right? Which was what happened here as well, right?

You build something to scratch your own itch, and then you modify it, and then you let your taste plus feedback guide the evolution of the software. But because this is a testing library, the first question is, what were you testing it on? Like, were you writing random test cases or random websites to test it on, or what?

David MacIver: That's a good question. So for the initial version, I would have just been running it on toy examples, basically. There's this small list of examples that everyone uses, most of which I don't think are very good. Like, your classic property-based test example, which I hate, is: if you reverse a list twice, you get the same list back. But things of that ilk. So I think initially I had a bunch of toy examples.

Somewhere leading up to Hypothesis 1.0, which would have been in early 2015, I realized that it was sort of stupid to have a badly tested testing library. So I decided to go for 100% test coverage and just wrote a whole bunch of manual tests for it.

Hypothesis is tested with itself in places, but I think that's — it's not always a gimmick, but it's mostly a gimmick. There are a few things where Hypothesis genuinely caught bugs in itself, but for the most part Hypothesis is tested with itself because we can.

When David Knew He Was On To Something With Hypothesis

Cedric Chin: The second interrelated question is, at what point do you know that you were onto something?

David MacIver: That's a good question. I think the point at which I had the insights that led to the 3.0 release, the universal representation — that was the point where I was just like, "oh, this is actually genuinely novel in important ways." And so that's when I knew I was onto something from the research side. As for whether I knew I was onto something from the usage side, honestly, it's been a very gradual thing.

I tried to push the usage a bit too hard, a bit too early, I think. And so I was trying to build a business around Hypothesis in sort of 2015, 2016. And I didn't do a very good job of that. So I think at the time, I wasn't convinced I was onto something, simply because I figured I would be having an easier time building a business if I were. But it's been a slow build, and it's gotten more and more traction as people have heard about it. And I think at this point we've hit pretty good usage. I don't remember the statistics — I should have looked this up — but I think JetBrains does this survey of Python developers, and a fairly gratifyingly large percentage of them said they were using Hypothesis at this point.

I don't think we're at the point where the majority of Python users use Hypothesis or anything. I think it was probably more in the low double digits range, but…

Cedric Chin: I think many companies use Hypothesis, if I'm not mistaken. I had a friend who works at Google — they have universal search — and he found Hypothesis inside the Google codebase as well. So yeah.

David MacIver: Yeah, I think it's true that there's at least — like any big user of Python probably has Hypothesis somewhere in their codebase. Like it's probably not like the major way they do testing, but I think like anyone who is sort of very into software testing and Python either has used Hypothesis, uses Hypothesis, or Hypothesis is on there as "oh yeah, I keep meaning to use that, but I haven't been able to figure out how yet".

I think it's actually very easy to figure out how, but people think it's scarier than it is. And I think that's genuinely one of the big problems with adoption: because Hypothesis has this Haskell QuickCheck association, everyone assumes that it's like a scary thing for people who are super into types and formal verification, when actually it's just software testing.

Cedric Chin: Yeah. I think to speak to that though, like how broadly Hypothesis has spread, I remember your 1.0 release because at the time I was mostly writing in Python and I think it was 2015 if I'm not mistaken.

David MacIver: Yeah, 1.0 was 2015.

Cedric Chin: Yeah, so I was newly in Vietnam and I still subscribed to this Python mailing list where they talk about cool stuff and projects in Python, and Hypothesis was one of the big links — like the big project — in that particular newsletter.

How David went from Hypothesis to Coaching

Cedric Chin: So that's how I sort of stumbled upon that. I guess this is a nice segue to talk about — we'll circle back to the business in a bit — but I just want to get a sense of like, maybe people should hear your story of how you got here, now that we've sort of covered Hypothesis, which is one very interesting part of your life, but then now you're doing something very different.

David MacIver: Yeah. After Hypothesis — or after I failed to build a business around Hypothesis — I ended up doing a PhD around it. My serious joke about this is that I was really enjoying the research and I was really struggling with the business, and there's a place for people who enjoy research and struggle with business.

That didn't go so well and I've quit the PhD now, but at some point in the middle of the PhD, I sort of looked at my life and I was like, you know what, I'm not very happy. That seems like something I should do something about. And so, I did what any self-respecting nerd would do when they want to understand their feelings, I read a whole bunch of books.

Some of them were very much at the self-help, don't-take-it-too-seriously end, and some of them were more at the serious therapy end. Some of them were just a whole bunch of random life stuff. So I started working on trying to understand my own emotions, trying to understand life, the universe and everything.

At some point I think — that must've been like very late 2018 — at some time in early 2020, I was just like, "okay, I've got all of these ideas in my head and I've written about a tiny fraction of them. I should clearly fix that." So I just started writing a huge amount on my notebook blog, which is just a place to put poorly formed thoughts and people found it really useful.

So I started a newsletter. And then people found that quite useful apparently. I'm not making a huge amount of money off the newsletter, but I got paid subscriptions and people like it enough to pay me for my writing, which is honestly a weird feeling. And then back in September, I put out a small tweet basically saying, "so I've got all these thoughts and useful skills, but would anyone like me to coach them?" I can probably be a pretty good coach and it turned out that quite a lot of people wanted me to coach them, which again, slightly surprising, but that's been going pretty well.

Earlier this year I was having a sort of heart-searching moment where my PhD and my coaching — or coaching and writing — were sort of getting in the way of each other. And it was very clear that there's one that I want to do and that has a large positive impact on the world. And there's one that I don't want to do and doesn't have much of a positive impact on the world, and where I'm essentially having to fight tooth and nail to — I mean, that's not entirely fair. I only actually got one paper published in the course of my PhD, but it was quite painful to get that published, and it made me dread the idea of trying to get another one done. And in the meantime, there was all of this writing that was getting really positive feedback and was clearly helping people, and it just seemed much more sensible for me to focus on that.

So I did. And now I'm sort of trying to tie all of these themes together because one thing that I've noticed that's interesting is that all of my clients are software developers. Some of this is because I know a lot of software developers and some of this is because software developers have money to pay for coaching.

But I think a lot of this is also that I've ended up with the slightly weird niche of explaining soft skills to software developers because I have the right mindset for them. And I had sort of spent a lot of time learning to communicate well. And so when I take all of these ideas from therapy and self-help and philosophy and feminist literature and political theory or whatever, and sort of try and go, okay, but how do I actually use those?

The result ends up being something that is quite palatable to software developers, because it's explained in relatively clear practical prose. So I'm currently trying to turn this into more of a business focus thing where I go talk to software companies and software development teams and try and use the same skill sets at the team level, but that's sort of early days.

Emotional Reactions as Legacy Code

Cedric Chin: I think you answered my follow-up question, which was to talk about "what were the things that you were writing about that were so attractive to the people who were reading it?" But perhaps to colour that in a bit more, could you give more examples? Like some concrete examples of these things that really helped the people who were reading them.

David MacIver: Yeah. So one thing that is a good example of what I'm doing — explaining feelings to software developers — is that I have this metaphor of emotional reactions as legacy code. Which is actually very solidly grounded in actual therapeutic theory, but is definitely not how a therapist would explain it to you.

And so the idea is this: an emotional reaction is — you're having an emotional reaction in order to do something. This is from a psychologist called Adler, who is one of the original big three. He's like the fifth Beatle of psychologists, though, because people don't talk about him that much.

You've got Jung, you've got Freud, you've got Adler, and there are others as well who I'm sure are important too. Anyway, Adler's point is that when you get angry at something, you get angry in order to behave in an angry manner. The anger is driving you towards a particular course of action and is enabling you to do so.

You can think of your emotional reactions as a bunch of learned responses like this. You are angry because at some point you have learned that it is useful to be angry here. And this works with other emotions as well. Some of them are less clearly driving you to a particular action and are more of an evaluative or judgment call. Like, you can feel happy, and that doesn't necessarily mean you're driven to act in a happy manner. That's more of an evaluative "this is good, I like this, I want to do more of it." But either way, there is a learned reaction from past experiences that is producing a particular emotion.

And the interesting thing about these is that those emotional reactions aren't always right. And this can be because they were learned in a completely different environment. Like one of the things that I encounter a lot, both in myself and others is, they were learned in a school environment or a home environment where you had very little agency. So one of the things that I think a lot of adults struggle with is remembering, "I am an adult, I'm not a child. I don't have to react to this in child ways." I can actually leave the situation. I can stand up for myself. I can solve this problem. You have much more agency as an adult but it doesn't necessarily feel that way when you're looking at an emotional reaction.

So there's something called coherence therapy, which is essentially about debugging this legacy code. It's about saying, "okay, where did I learn this?" And to be honest, it doesn't have to be 100% true when you figure out where you learned it, it just has to be true enough.

Like, why might it have been a reasonable reaction to that situation? And what are the relevant differences between the current situation and the situation in which I learned it? And then you can essentially update the reaction while the emotion is actually active, by pointing out the differences that your pattern matching system should have picked up on. You can go, "oh, okay, I guess I don't need to feel that way." And sometimes that works. It's tricky and imperfect, but it does seem to genuinely help people improve their emotional responses a lot of the time. It's certainly worked for me.

Why David's Approach to Self Improvement Works for Programmers

Cedric Chin: I'm resonating a lot here, because I am a software engineer and I used to be a software engineering manager, and dealing with your subordinates' problems is part and parcel of the job.

Do you think there's something about being a software engineer or like, what is it about — this is a badly formed question — but like, what is it about your approach that helps software engineers?

David MacIver: I think there are a couple of things. One of them is that software engineers are honestly too prone to dismiss stuff that seems a bit fluffy or a bit too much like — one of my more useful services is probably that I read texts from the humanities and then I translate them into terms that software developers won't immediately reject.

Cedric Chin: Example?

David MacIver: I think the emotional stuff just now is an example.

Another one was — I was going to say I did a keynote for PyCon UK a few years ago in which I talked about James C. Scott's book, Seeing Like a State, but software developers mostly like Seeing Like a State I think. Or at least they like the blog post about it. They don't like the book because none of them have actually read the book.

Cedric Chin: That is so true.

David MacIver: Yeah, which I sort of think is fair enough. But let's think.

So another example, which I know we've talked about a little bit in the past, is that I have a blog post about false negatives and false positives in interviewing. A lot of my thoughts on that subject come from the feminist epistemology literature, which is both feminism and philosophy, and thus software developers are not particularly likely to read academic texts on it. But those texts do largely point to relatively concrete problems that people actually face. And so once you can explain them to people without using the word hermeneutical, they're much more likely to take you seriously. Because of course, as we know, software developers don't like weird jargon ever.

Ethical Problems with Optimising False Positives in Hiring

Cedric Chin: That was a good joke. You referred to the idea, but I thought it would be useful to actually talk about it — I think it was two blog posts where you talked about hiring and false positives and false negatives, which are terms that, in the blog posts, you say you don't like to use. So could you explain that?

David MacIver: Yeah, okay. So when you're hiring someone, there are two things that can go wrong, more or less. You can hire someone you shouldn't have, or you can fail to hire someone who you should have hired. In the literature, these are called false positives and false negatives.

A false positive is hiring someone who you shouldn't have hired. A false negative is not hiring someone who you should have. But I'll call them bad hires and bad rejects, I guess. And the thing is that bad hires cause you a real problem. If you've hired someone that you shouldn't have, then the best case scenario is you fire them, which takes a while, wastes a lot of your time, and you're paying them money and they're causing you problems while they're there. The worst case scenario is that you don't fire them, and they stay on at your company forever, costing you money and causing problems. And this is a highly visible problem that causes you a lot of issues.

On the other hand, if you have a bad reject, someone who you don't hire who would have been great, you're never going to notice this. You're never going to— you'll occasionally notice this. But most of the time, what will happen is that you will say "sorry, we're not gonna hire you" and then they will leave and you'll never hear from them again.

And so as a result, you have a very strong incentive to reduce your bad hire rate. And the only thing that gives you an incentive to reduce your bad reject rate is that hiring is expensive. Like if you reject too many people who you shouldn't have, you might end up spending like an order of magnitude more on hiring than you otherwise would have.

But this is the sort of cost that is hard to see. You'll just think of it as, "oh well, fact of life, hiring is expensive." And the additional problem that I talk about in one of these blog posts is an ethical one. Which is that if you look for traits whose presence is a good sign that someone is a good fit, but whose absence is not a good sign that someone's a bad fit, then this will look very good on the minimize-bad-hires metric, but look very bad on the minimize-bad-rejects metric.

Your classic example of this is looking for open-source contributions on CVs. If someone has done great open-source contributions, then it is probably legitimately true that they are a great hire. It may not be, like they may turn out to be a complete asshole, but on technical skills they are probably genuinely quite good. But there are all sorts of problems with this in terms of disproportionate rates of participation in open source. Like, I think open-source participation is far more male than even the background rate for software development.

I don't necessarily think that in itself is a problem, because I also think open source is horribly exploitative, but it leads to problems when we tie hiring status to open-source contributions.

And another classic example of this is only hiring people from top universities. Many great people go to less prestigious universities and get ruled out of hiring when they shouldn't have been.

And so this is the problem I was talking about in this post. There are all of these invisible ways that add up on the false negative rate, that are unintentionally unethical, and that we should pay more attention to. And also, from a business point of view, they are just stupid — or not necessarily stupid — but we're discounting the cost of them in ways that we shouldn't be, by just sort of shrugging our shoulders and saying that hiring is expensive.

Cedric Chin: I think what struck me as remarkable about that post is that you pointed out that this is just a natural side effect of the problem, right? Like, nobody's being evil. They're just responding to the incentives that are in front of them, which is that hiring the wrong person is really painful: you have to fire them, you have to deal with the fallout, and however long they stay with you, there are problems with that. And so as a result — and I feel this keenly, and as I was reading your posts I was reflecting on my experiences, because they evolved in exactly the way that you pointed out — it would lead me, and any other hiring manager who's ostensibly fair or who wants to be good at hiring, down the wrong path where we completely reject candidates that otherwise… It's a subtle thing, which is what you were trying to point out, right? It's not that we were explicitly rejecting them. It's that they self-select out of the pool or whatever. And therefore we have even more of a diversity problem in tech.

David MacIver: Yeah, absolutely. And yeah, I very much agree that this problem is not caused by anyone being actively evil. It's an inevitable, unthinking following of the incentives. Once you're aware of the problem, you can start to sink in a bit of extra effort upfront, and you will get access to all sorts of candidates who you would previously have excluded, and rebalance to basically have a lower false negative rate — sorry, now I'm doing it — a lower bad reject rate, while keeping the bad hire rate more or less the same. It requires knowledge and a bit of upfront investment of effort. And the reality is that it is probably still going to be the case that your bad hire rate is always going to be more important than your bad reject rate. But I think putting in upfront effort to try and lower the latter is worth doing, very much from an ethical point of view and arguably from a business point of view as well.

Ways that Programmers Harm Themselves in their Careers

Cedric Chin: To sort of loop this back to your work with programmers, what other ways do programmers harm themselves without knowing it in their careers?

David MacIver: Gosh, that's a good question. I might need to think about that for a moment.

The biggest one is that they expect too many things to work like programming. This isn't 100% true, but it's 90% true: programming is very much a deterministic discipline where you tell the computer what to do, and it does it. You are expressing things in a precise language and the computer is following your instructions precisely. And the data you're working with has already been turned into a digital format that you can work with.

Unless you're working at the cutting edge of human-computer interaction, you're largely taking the way that a human interacts with a computer as a given, and only working with that digital data. The real world is much messier than that. Often there isn't a single right answer. Often things behave in weird ways. Often things aren't going to be 100% right and you need to work with them anyway. And so there's this treating the real world as both having and needing more precision than it actually does.

I think you see this a lot in the "Hacker News commenter has discovered a field and immediately solved it in five minutes" style of comment. And that's the extreme failure mode, but you also see this in the way that programmers argue. There is very much a demand for precision in cases where precision is probably not really what's needed.

Cedric Chin: Could you give a concrete example of something that limits a programmer in their career as a result of their being a programmer? Maybe somebody you know. Maybe yourself.

David MacIver: I mean, "as a result of their being a programmer" is not necessarily right. The programmer mindset… Now I'm doing the unnecessary precision thing.

Cedric Chin: I'm used to it. I was a software engineering manager. So this is like bread and butter.

David MacIver: I should have a good answer to this off the top of my head, but I'm not sure that I do.

Cedric Chin: I'll give you my favourite — well, one of my favourite examples, which is that I have this term, "fly fucking", which is from some European language; it was a European hacker who taught it to me. And he said it's very useful as a concept to bring up in conversations with software engineers when you're debating about something. Because what usually happens is that somebody makes an argument, and then they give an example to support the argument. And the example is slightly wrong in some way, and then programmers pounce on it and argue over that completely irrelevant example for like 30 minutes, wasting everyone's time. The right response is: you're fly fucking, let's stop, let's move back to the main point.

And the term "fly fucking" is like the fly is so small, you still want to fuck it. So that's my favourite example. When this kerfuffle happens and there's a non-technical person in the room, I turn to them and say, "give me a moment, I'll explain to you what just happened." And then I go and I turn to the programmer and say, "okay, you're fly fucking. Stop and let's go back to the main point."

David MacIver: Yeah, so that is a good example. And I guess another example in the "programmers having arguments" space is what I think of as the Vim versus Emacs debate. Or the static typing versus dynamic typing debate. There are two levels of this. One is just people being so convinced that there's a right answer. And the reality is that there probably isn't a right answer. And there certainly isn't a right answer you can logically argue your way into.

And the next level of this is "okay, so why don't we have empirical data about this," where programmers don't understand empiricism. Because they think empiricism looks like physics, and the reality is that if you try to do a physics-style experiment on something like Vim versus Emacs, or static typing versus dynamic typing, it's almost impossible. So what you end up with is these really weak experiments, which gather a hundred students in a room, try a short task on them, and see how it goes. And they conclude that maybe there's a little bit of difference, kind of, possibly, sort of. And programmers will immediately pounce on all the methodological flaws with these, not 100% inaccurately.

It is true that these experiments are run on inexperienced developers, because experienced developers are super expensive. There's a relatively small sample size, yes, because even inexperienced developers are super expensive to run experiments on, because they are people. And it's over a short period of time, yes, because running it over a long period of time costs proportionately more. And these are all of the problems that you have when trying to do quantitative research on humans.

And the reality is that you just have to bite the bullet and do qualitative research, do ethnographic studies of what developers actually do. And they will probably never tell you that Vim or Emacs is better, because what they'll actually tell you is that both Vim and Emacs are awful.

Cedric Chin: What a way to end that anecdote.

What Non-Technical People Get Wrong When Dealing with Programmers

Cedric Chin: So here's an interesting follow-up question to that. Say we flip that. You have spent a lot of time helping software engineers who try to apply the programming mindset to too many things, and it limits them, whether it's with their emotional lives, or with their careers because of communication and whatnot.

But if I could flip that on its head: say you're talking to somebody who's non-technical who, for whatever reason, has to deal with programmers. Maybe they're a product manager, or maybe they're a CEO of a firm with a large technical component. What would you tell them to help them work with programmers better?

David MacIver: Oh gosh. That's a good question. So far most of my clients at the executive level have been ex-programmers so I have neatly sidestepped this problem by being the opposite. I think the thing that I would probably tell them is to think of programmers as very systematic people. At their best, they are. This isn't always true, of course.

Actually, that's a good question. But I don't have a good answer.

Cedric Chin: I just realized that there's an easier way of asking this. What would a novice get wrong?

David MacIver: When talking to programmers?

Cedric Chin: Yes.

David MacIver: I think the first thing that a novice would get wrong when talking to programmers is that they would assume that the programmers are much angrier with them than they actually are. Because there's a strong argument culture in programming.

Like, the Vim versus Emacs thing was an example of this. But it's not that the programmers are actually angry a lot of the time.

Cedric Chin: Yeah, that's really bizarre now that you've pointed it out, I've never actually… go on.

David MacIver: Yeah. So the thing about programming is that for technical decisions, there is often a right answer. Not necessarily "should we do it this way or that?" But like, why is this bug happening? What's going wrong here? What is the actual outcome of running this program?

So programmers live in a world where most of the time there is an objective truth and people are trying to come to the objective truth. I think a mathematician is the more extreme example of this, but programmers are about 60% of a mathematician in this regard. If a programmer is arguing with you, it's probably because they want to understand and they want to get the right answer. Not because they think you're a bad person for having said the thing you did.

Cedric Chin: What's another thing that a novice would get wrong in this way?

David MacIver: I think the thing that, as a programmer, I've been most frustrated by novices getting wrong is that often they will assume that they can understand more of what's going on in the technical decisions than they actually can. One of the things I have as a general principle is that people understand problems much better than they understand solutions to those problems. And particularly — I usually found this with people who were in charge of the company, like CEO types — they will have some very clever idea that absolutely will not work. And it will take three hours to explain to them why it won't work, and they still won't get it at the end of the three hours.

There are two parts to this problem. One of them is that they assume that they can understand it. And the other is that the programmer really wants to explain it to them.

Cedric Chin: So true! Oh my gosh.

David MacIver: The first three times, at least. Afterwards it gets old.

Cedric Chin: Okay. So basically if you are a non-technical person and you're dealing with programmers, well, this is good advice: you should not spend that much of your time in the solution space, is what you're saying. And instead spend all your time explaining the problem to the programmer.

David MacIver: Yup.

Cedric Chin: Oh, that's really good advice.

Applying Lessons Learnt from Building Hypothesis to Coaching

Cedric Chin: I'm going to segue a bit and backtrack a bit and ask you, was there anything that you learned from your Hypothesis experience that you brought into your coaching and your coaching experience that you're doing now?

David MacIver: Yeah, so there are a couple of things. Like, I find myself drawing on technical analogies from Hypothesis surprisingly often, and finding them useful surprisingly often. Which I think is a result of my brain being very good at drawing connections between things, connections that work pretty well but are not necessarily the thing that you would come to if you had a sensible background.

So an example of this is a post in my notebook, "life has an anytime algorithm". An anytime algorithm is an algorithm that is trying to do something. It's trying to get the best possible thing. But also, at any point, you can stop it and say, "hey, give me your best answer now and just stop working. I don't care anymore."

So for example, shrinking, which I talked about with Hypothesis earlier, is your classic example of an anytime algorithm. It's got some test case, and it's trying to make the test case smaller and simpler, but it's doing this to save you time. And if it has been doing this for an hour, you're probably bored by now. And you want to say, "look, just stop. Give me what you've got. I really don't care about the last two bytes you're shaving off the answer." Usually you want to stop it long before an hour. So this is an anytime algorithm: you're designing it to get the best possible result eventually, but you're also designing it so that it makes useful progress along the way. At any given point, you can just cut it off and say, "you're done now."
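As a rough sketch of the idea (this is illustrative Python, not Hypothesis's actual shrinker), an anytime shrinker keeps a best-so-far answer that stays valid whenever you cut it off:

```python
import time

def shrink_anytime(test_case, still_fails, deadline_seconds=1.0):
    """A minimal anytime shrinker: repeatedly try smaller variants of a
    failing test case, keeping the smallest one found so far. Stopping
    early (here, via a deadline) still returns a useful answer."""
    best = test_case
    start = time.monotonic()
    improved = True
    while improved and time.monotonic() - start < deadline_seconds:
        improved = False
        # Try deleting each element; keep the first smaller variant that still fails.
        for i in range(len(best)):
            candidate = best[:i] + best[i + 1:]
            if still_fails(candidate):
                best = candidate
                improved = True
                break
    return best  # the best answer found so far, whenever we stopped

# A hypothetical bug that triggers whenever the input contains a 7:
failing = [3, 7, 1, 9, 7]
print(shrink_anytime(failing, lambda xs: 7 in xs))  # → [7]
```

The point is the shape of the loop: `best` is always a usable answer, so cutting it off early, via the deadline or a bored user, still leaves you walking away with progress.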

So I've often found it's very useful to structure life projects as an anytime algorithm where you are trying to achieve some grand goal, but also you acknowledge that you might just get bored or you might sort of decide it's no longer worth the cost at some point. And you want to make sure that when you do that, you're walking away with something you're proud of.

Cedric Chin: That is good. That is really good, wow.

Rigour in Self Improvement Writing

Cedric Chin: I was going to say one of the highlights, one of the things that really attracted me to your writing is you have a very rigorous — I guess, rigorous is the right word for it — approach to self-improvement and life skills and learning.

To start off this section of your work, could you talk a bit about how or why you think you're effective at that set of topics?

David MacIver: Yeah. So I guess one thing I would say is, rigorous approach, but haphazardly applied.

Cedric Chin: Go on. What do you mean by that?

David MacIver: What I mean is that… essentially, the rigor is often added after the fact. I work on whatever feels important. And I tend to jump from topic to topic based on what currently has the highest cost-benefit ratio. There, I'm doing it again. But I think one thing that happens a lot, if you look at my writing from the outside, is that it seems to jump from topic to topic. This week I'm talking about — I can't even remember what the last newsletter post was about.

Cedric Chin: Chopsticks.

David MacIver: Chopsticks. Yes, exactly, yes. So that was the post about me teaching you to use chopsticks. How's that going by the way?

Cedric Chin: I haven't bought it yet.

David MacIver: Okay, fair enough.

Cedric Chin: So for listeners who are listening to this, the post about chopsticks is about learning. And I don't know how to use chopsticks properly. And David suggested I buy cooking chopsticks, which are large, and that's a lateral move into learning the thing, because you're changing the context. But go on.

David MacIver: Yeah, so this is a perfect example of the writing thing, where we had a conversation and it resulted in me writing something. And as a result, the lateral move approach is more at the forefront of my mind. So I'll probably be on that trend for a little while. Or I might base something off of the latest book I'm reading.

So it's very scattershot, do all the things. But it's part of this anytime algorithm approach. What happens is that when I figure things out, I write about it. And I think the practice of writing about things as I've figured them out is very much what drives this analytical, rigorous approach. Partly because I am coming from a background where I'm used to explaining technical concepts clearly, so this is just how I explain things. But I think having the constraint of being able to explain what I'm on about in a single, relatively self-contained essay, by its nature, produces these sort of scientific, really nice, usable tools.

Cedric Chin: Could you give listeners a sample of that? Like one example of the many posts that you've written in this self-improvement area.

David MacIver: So I think the foundational one, which I wrote probably back in 2019, before I really started the self-improvement writing gig, is called How to do hard things. There's an interesting thing that happens when you write something too late, once you've already internalized the idea. While I was writing it, it felt very boring, like I was just explaining something obvious. I was like, "oh God, people don't need to hear this. They know this, right?" That was not the case: that one landed at the top of Hacker News for a while and was my second most popular piece of writing — the first one being a piece of Stargate fan fiction. Are you not familiar with my work in Stargate fan fiction?

Cedric Chin: Absolutely not familiar. I buy chopsticks, yes. Go on.

David MacIver: So the Stargate fanfiction of course being my other great contribution to the world of software testing.

Cedric Chin: So go on about How to do hard things.

David MacIver: Yeah. So How to do hard things is essentially: when there is something that is too hard for you to do — hard enough that just trying it doesn't get you any traction — you find something that is easy for you to do, and you make it hard in exactly one way that is like the hard thing.

So if you are struggling with writing a book, write a short story. If you are struggling to juggle, try just tossing a ball up and down in one hand. That's not strictly an example of it, but the point is: find something that is easy for you and try to draw a path in the direction of the hard thing.

And I think this is part of why I have this haphazard approach, in some ways. I am always finding something easy and taking the next step from there. And often that is in the direction of some grander goal, but from the ground, that isn't necessarily obvious.

Cedric Chin: This is really interesting. This approach, did it inform your approach to Hypothesis? Because I'm sort of seeing parallels here, you know?

David MacIver: I think that was written after I had done most of my major work on Hypothesis.

Cedric Chin: But was it something…

David MacIver: Hypothesis was much more done in a "solve the problem in front of you" sort of mindset, where I had this thing that existed and it was working. So it was a similar incremental approach, but there wasn't some overarching goal I was reaching towards particularly. It was much more, "I have a practical problem that I want to solve" and Hypothesis in its current form is sort of infrastructure for solving that problem.

Cedric Chin: Right. So this How to do hard things like from which part of your life did it come from?

David MacIver: To some degree, all of my life. But I guess the two main things that I practiced it most on are software development and writing. I've been continuously improving at both for most of the last, let's say, 15 years. It's closer to 20 years, but a lot of that has been, not necessarily at all times with a single overarching goal, but very much this "take an easy thing and make it harder" approach, continually pushing the boundaries of the comfort zone to try to improve at the skill.

Explaining Computers to Non Technical People

Cedric Chin: So one thing that you've mentioned in the past is that you are pretty good at explaining computers, and it ties into everything we've talked about before. And I think you're writing a book about that, right?

David MacIver: That's the plan. It has been slightly shelved: this month was the point where I was going to be focusing on it, and then I got a bunch of potential clients for the team training stuff, so I've been a bit more focused on that. But it's a thing that has come up in my writing a bunch of times, and the starting point for the book is to assemble a bunch of essays I've already written about it and try to fill in the gaps and expand upon them.

Cedric Chin: Could you say more about the pitch, and about some of the approaches inside the book?

David MacIver: So, the basic pitch: I wrote a blog post a while ago, How to explain anything to everybody, where the core concept is that in order to explain something to someone, you need to put it in terms that they already know.

This is the single biggest thing that programmers fail at when trying to explain technical concepts: they assume people know things that they don't know. You heard some of this earlier when I was explaining Hypothesis to non-developers, where I started by going, "okay, so first let me tell you about software testing." And that is an example of something that I have learned slightly the hard way: that you need to first check in with people to see what they already understand before explaining Hypothesis.

And so How to explain computers is about providing people with the layers of scaffolding they need in order to understand something. And this can be either to give someone a complete understanding, like taking someone from novice to developer, or it can be explaining a problem to someone, or it can be giving people a sort of sense of a thing.

And it is also, to a large degree, about communication skills. Here, I'm going to put you on the spot. Suppose I come to you as a non-technical person, like, obviously I know nothing about computers, and say: Cedric, what's a web browser?

Cedric Chin: Oh dear. A web browser is… Chrome? It's a thing that you use to view webpages.

David MacIver: Okay, yeah.

Cedric Chin: I'm sort of making a face here, if you're listening on a podcast, because it's really hard. As a programmer, all these concepts just exploded in my brain. And to tell a funny story about this: there was once a workshop that I did at university — the workshop was to teach the basics of HTML and CSS, which are the basic building blocks for making websites — and my friend and I had a bet going that whoever said the word DOM would have to pay the other person.

Thankfully I didn't fail, my friend failed. And for non-programmers listening to this, the DOM is the representation — oh, this is terrible, you're really putting me on the spot here — it is the thing that the browser constructs in order to show you the web page. It's totally unnecessary. You don't need to know anything about it, but yeah.

David MacIver: Yeah. So this is a question that I posed when I was giving some lectures in academic writing a while ago. And I don't remember what I said at the time. I did come up with what I think was a reasonable explanation of a web browser to non-technical people. And then I pointed out that this was the wrong answer because the right answer is, why do you want to know?

Cedric Chin: Oh my God, oh my God. That is so true. That is so good.

David MacIver: Because if you've launched into a technical explanation of HTML and CSS and so on and so forth — so one thing you did right, which was good by the way, was you gave a concrete example to start with, you said Chrome. Because like my prototype answer for this is like "this website is telling me that I need to use a supported web browser."

Cedric Chin: Ooh, that's a, wow. My answer would completely change if somebody was asking with that context in mind.

David MacIver: Yup.

Cedric Chin: Oh, wow. Wow, I can totally see now why yes, companies should hire you to teach programmers to communicate better with the rest of the company. Oh my goodness.

Do you have another example in this category? This is amazing.

David MacIver: Someone once asked me what a programming language was and…

Cedric Chin: Ah.

David MacIver: And here there's no trick to this. Like he was genuinely just curious. The context was that he was the son of someone who was at a Python conference with me. He was there essentially — we were in Namibia at the time — so he was there partly for the holiday and partly I think because it was an interesting learning experience for him.

Cedric Chin: How old was he?

David MacIver: It's a good question. He was late teens. So I think somewhere in the 16 to 18 region. I asked him how much detail he wanted and whether he had some practical question in mind here. And the answer was that he didn't have a practical question in mind here, and he wanted a lot of detail.

So I walked him essentially down the whole stack. I was just like: okay, so a computer is a machine that follows a series of instructions. And it's got various things: it's got RAM, which can store information, and it's got a CPU, which executes these instructions. And by default, these instructions are just this incomprehensible stream of numbers that you can't really read as a human. So what we want is to be able to take something that a human can read and translate it into this series of instructions. And I went into a lot more detail than that, but that's the basic idea. And one of the other programmers was sitting in, listening, and he was just like, "it's interesting, I was waiting for you to use a big word that you hadn't explained, and you never did."

The Nature of Mathematical Expertise

Cedric Chin: Incredible. For those of you who are listening, the really mind-blowing thing for me was when David explained to me what the mental model is for people who are good at math. And the context here is that I am horrible at math, and I was never good at it. I struggled with it in school, and the main reason for that was that in secondary school, in my very final year before the school-leaving exams — I still remember the class — the teacher was like, "oh, I forgot to teach you trigonometry! And there's no time left to teach you. So let's just pray that it doesn't come up on the exam." And it didn't, thankfully, and I got an A, but ever since then—

I think the way you put it, very succinctly, was that math is very unforgiving. If you lack a foundational pillar, you will suffer when you start going on to pre-university and then university. And that was exactly my experience. The way that you explained it to me, and I'll let you explain it in a second, is that people who are good at math don't do what you think they do.

Everybody thinks that good math people just do what they learned to do in secondary school, which is to regurgitate learned rules. And David, perhaps you should explain this because this was quite profound to me.

David MacIver: Yeah. So I think your typical experience, for somebody who ends up being good at maths, is that in about the first five minutes of the class they go, "oh, I get it now." And then the teacher keeps speaking for the rest of the class, and they zone out and think about something else. The problem with the model that you're taught in school is that you memorize maths as this long list of facts and rules and rituals, and nobody ever actually explains to you what's going on.

I'm a big fan of a paper by Lockhart called A Mathematician's Lament, which basically complains about the deplorable state of mathematical education. And there's this great line in it about how everyone thinks that they understand the problem: the teachers say we need more funding, the government says we need more discipline, and the students say maths class is stupid and boring and I hate it. And the students are the only ones who are right.

Mathematics is a body of working knowledge. You learn mathematics in order to do things: you solve problems, you spot interesting patterns, you try to figure out those interesting patterns, and you fit everything together.

I can't remember the quadratic formula off the top of my head, but I could work it out. And the fact that I can work it out means that, when I am using it on a regular basis, it will not be difficult for me to remember. Because I don't get to a point where I need to use the quadratic formula and I can't, and then I'm stuck, and as a result I get no practice with the quadratic formula. Instead I go, "oh, I think it has roughly this form, I know roughly how the proof goes, I'll just complete the square." You'll probably never have heard of completing the square in your maths class, but it's a very simple proof step, and it's how you work out the quadratic formula.
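For readers who haven't seen it, completing the square is the short derivation David means; starting from the general quadratic and assuming $a \neq 0$:

```latex
\begin{align*}
ax^2 + bx + c &= 0 \\
x^2 + \tfrac{b}{a}x &= -\tfrac{c}{a} \\
\left(x + \tfrac{b}{2a}\right)^2 &= \tfrac{b^2}{4a^2} - \tfrac{c}{a}
  = \tfrac{b^2 - 4ac}{4a^2} \\
x + \tfrac{b}{2a} &= \pm\,\tfrac{\sqrt{b^2 - 4ac}}{2a} \\
x &= \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
\end{align*}
```

Each line is one mechanical step, which is the point: knowing the steps means the formula can always be rebuilt rather than memorized.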

And so by knowing all of these intermediate steps, and by treating mathematics as a working skill which I'm actually using, it's by doing mathematics that I get better at mathematics, rather than mathematics being a matter of applying things I've already learned. And you barely ever do mathematics in this sense in school. You're never really told how to prove things — you do some proofs in geometry, but even those proofs are badly explained, and that tends to be the class people hate the most.

And so there is a whole mindset of how proof works and how knowledge fits together that no one ever sits you down and explains the bare facts of. And I think this hits people especially hard in mathematics, because mathematics is so abstracted from anything that people are actually doing that…

So this is the other thing, which I think is my version of Lockhart's point that maths class is stupid and boring. In class, people ask, "when are we actually going to use this?" And teachers treat this as an invalid question. But no, this is the correct question. If you do not find it interesting and you're not going to use it, you shouldn't be learning it, because you won't actually be able to remember it. Using it is how you remember mathematics.

Cedric Chin: Yes. I think the reason your explanation was so resonant was because I did a computer science degree. You did math in university, right?

David MacIver: Yep.

Cedric Chin: Yeah. So we had five classes that were math. And there were really good math people in the class. It was in the math department, and it was taught by math lecturers, math professors.

And they did exactly what you described. I noticed this. They were like, "um, okay, I got it," and then they were bored, off doing other things during class. So when you described that, I was like, ah, that must have been what was going on in their internal world. And everything you said about how they worked it out as a body of working knowledge mapped 100% to whenever I went to these people for advice, for help, right? They had this very intuitive sense of the problem. I could tell that there was something very deep going on, some sort of cognitive process that I had no access to. And I had no idea what it was until you explained it. Now I wish I could go back in time and tell my younger self: this is what they're actually doing, and this is how you get good at math.

How did you come up with this? Or how did you stumble onto this?

David MacIver: I don't remember my school maths classes well enough to properly answer that, but it's okay. The honest answer is just that I'm quite smart. And I didn't have much better to do with my time, so I don't know.

Not to say that you're not smart, of course, but it's very much like, from my perspective, the thing that was happening in maths class was that they just weren't teaching me anything very hard.

So I essentially had a lot of time. I think there's an important thing with learning in general, which is that you need slack time in order to actually sit and integrate it. You can't just sit there and take in information at the rate it's being given to you. You actually need to sit and process it and so on. And I think this is true with mathematics in particular, because of how much it builds on itself, but with everything really: if you are better than you need to be for the material, then you will have this experience of sitting in class and essentially having that slack time built into the class, because you don't need to pay that much attention.

And this then compounds as an advantage, because you have a solid foundation to build on for the next phase. What you did in the previous step, properly integrating the material that you're now building on, means that you have an even bigger advantage in the next phase. So I think it is a mix of intelligence and luck early on, and having a good, decently mathematically shaped brain.

And I couldn't have explained any of this to you until probably the last five years. I knew intuitively that this was what was going on, but until I spent a lot of time talking to people and reading literature on skill development and expertise and trying to fit together what it was that people were actually struggling with, I couldn't have explained my own experience in this way. Because at the time mathematics just felt kind of obvious, and I didn't really start struggling with mathematics until my third year of university, which it turns out is an unfortunate year to start struggling, because they assume you know how to work hard at it by that point.

Cedric Chin: Yes.

David's Practice with Teams and Organisations

Cedric Chin: So I guess we're nearing time right now. And I just wanted to shift the conversation to the last bit, where we get to talk about your current practice, where you're trying to start doing what you do for individuals with organizations and teams. Could you talk a bit about what you hope to accomplish there and what's going on right now?

David MacIver: What I hope to accomplish there is, I've worked on a number of different software teams, and I know how to help software developers with their problem solving and soft skills quite well at an individual level. And it's just very clear to me that a lot of teams, even very good teams, have some area that they're struggling with. And having someone who can come in and essentially say, "hey, what's up? What are the problems you're facing? Shall we look at them and solve them together?" is, I think, the thing that most teams would benefit from. And to be honest, I expect that the teams will do about 80% of the work, possibly more than that, and my role coming in will be essentially to facilitate and give them the nudge in the right direction that is needed to make sure that what they're doing is the right work. It's partly because I've gotten good at that by helping individuals. I've also been doing some more executive style coaching with a CTO and a senior software developer.

And so I'm reasonably confident at this point that the things that I developed largely for solving life problems work pretty well for business. I mean, if anything, the CTO I coach is my easiest client because there are so many problems to talk about. That makes it sound bad; I just mean that a business is a large, complex entity. If anything, his business seems unusually good, and there's still a lot to talk about.

Cedric Chin: No, no, definitely. I resonate with that as well, because there's always a fire to put out in the business. And when you put out the biggest fire, there's the second biggest fire.

David MacIver: Yeah, yeah, exactly. It's like the universal constant of businesses is that something is always going wrong and something could always be better. I think there's a thing that Gary Klein says at some point in one of his books about how working on team cognition is so much easier than working on individual cognition, because all of the moving parts are visible to you. I can't look inside your brain and see what's happening, but I can talk to your team members and ask them what they think is happening.

Cedric Chin: To make this more concrete, I think we've articulated one key value: I would have loved to hire you, or someone like you, to talk to my programmers and get them better at talking to non-technical people in my company. That's one aspect of it. What other aspects are you good at helping with?

Getting Better at Sprint Planning

David MacIver: So one concrete thing that I am hoping to help people with is sprint planning. I think that people are bad at understanding the dynamics of sprint planning. A simple example of this is when you exceed your capacity, like when you don't get all the cards done from a sprint. That's a problem, right? How big of a problem is it?

Cedric Chin: Depends right? On what else is going on.

David MacIver: Yeah, so I would argue that you should treat it as a much bigger problem than people traditionally treat it as, because it partly ties into the thing I was saying about slack in learning. Building slack into your systems is a good way of ensuring that everything works better.

People are learning better. People are doing better work. People are able to deal with unexpected contingencies. So when you overrun your sprint goals, that is both a sign that you have significantly overestimated how much work you can get done in a sprint, and a sign that you are running with no slack.

So if you get to a point where you are, to within rounding error, always completing your scheduled tasks for the sprint, unless some genuinely once in a blue moon catastrophe occurs, then what you will probably find is that everything works better. And over time, the amount and type of work you can get done without overrunning your sprint goals will go up. So the sprints will actually become more productive by doing that. As long as you are routinely overrunning your sprint goals, you will end up in what I refer to as the "too busy fighting fires to invest in firefighting infrastructure" problem. Everything will be on fire all the time. And as a result, you will be significantly less productive than you would otherwise be, all of your team is stressed, and management can't reliably predict what is going to happen.

Cedric Chin: Well, okay. That makes a ton of sense. I did not think about it like that. That's really cool.

All right. Thank you, David. We're at time. I enjoyed this a lot and I learned a lot from this.

David MacIver: Thank you very much, Cedric. It's been a pleasure. I enjoyed it a lot too.

Cedric Chin: I hope you enjoyed that episode of Commonplace Expertise. If you liked it, I would be very grateful if you left a review on Apple Podcasts, or like and subscribe if you're watching on YouTube.

David, if you want more of him, may be found on Twitter at DRMacIver, and on his newsletter, where you can read his writing on self-improvement and soft skills for hackers and programmers. I've linked to both in the show notes if you are listening to this on your podcast app, or below in the description if you're watching this on YouTube.

Thank you for watching or listening, and I'll see you in the next one.