The Difference Between Experience and Expertise

There’s a wonderful quote by psychology professor Gregory Northcraft that goes “There are a lot of areas where people who have experience think they’re experts, but the difference is that experts have predictive models, and people who have experience have models that aren’t necessarily predictive.”

Northcraft’s observation was taken from William Poundstone’s Priceless: The Myth of Fair Value (and How to Take Advantage of It), and was made in reference to Northcraft and Neale’s 1987 paper on the differences in property pricing decisions between expert and amateur real estate appraisers.

The more pithy version of the quote is from Michael Mauboussin, and it goes “the difference between experience and expertise is that expertise is having a predictive model that works.” This is the version that I remember, and therefore the version that I’ll use for the rest of this essay.

Let’s take a look at some interesting implications from Northcraft’s idea.

Testing for Predictive Models

One of the most obvious applications of Northcraft’s observation is that when you are searching for people with expertise, you should evaluate them based on the accuracy of their predictive models.

This is a trite observation. After all, you cannot have expertise without making good predictions.

For example: when managing managers, one metric that I look out for is my managers’ ability to predict departures of their staff, and to act before such departures occur. Nearly all the good managers I know can tell me, off the top of their heads, the people most likely to leave their team in the coming months. Conversely, managers who are constantly surprised by staff departures aren’t quite as good.

Having this predictive model in their heads turns out to be a good leading indicator for all sorts of positive managerial qualities. It means that they know their people, and understand what drives them. It also means that they have kept up with the events in their subordinates’ home/work lives, and are able to construct a plausible narrative about each subordinate’s flight risk.

This description of managerial quality is itself a predictive model. Is it a good predictive model? No, it is not. Good managers are good with people, but great managers are a combination of systems thinker and people person. My evaluation of management ability, above, tests for only one of the two skills required to be a great manager. It doesn’t tell me if the manager is able to think through incentive structures, or the subtle side effects of implementing new policies in his or her team.

You may say that I’m demonstrating my expertise as I’m explaining my predictive model for good managers. But the true test of expertise is if this predictive model actually works — and I am not so sure of this model. My belief that great managers are a combination of a good systems thinker and a good people person is consistent with my experience, since I cannot think of counter-examples amongst the managers that I know. But you may have a larger sample set to compare with from your experience, and may therefore have a better model in your head as compared to mine.

My ability to acquire expertise is thus limited by my ability to update my model — especially when presented with good counter-examples from people who are better than me.

Northcraft and Hiring For Startups

Northcraft’s idea has a more interesting application, I think: it gives us a test for evaluating effectiveness.

A couple of years ago, my ex-boss churned through three heads of sales in a single year. All three were experienced people he recruited from larger organisations with hefty comp packages; they were hired to scale the sales team in our small company. Like clockwork, each candidate lasted no longer than two months in the business. (One of them spent an entire fortnight designing a dress code for the Singapore team, to engineering’s great horror).

Each time a candidate churned, I would ask: “What went wrong?” and my old boss would sigh and look sad and say “he wasn’t a good fit for a startup.” And then I would get it and share in my boss’s disappointment, and respond “Oh, well, that’s alright. We’ll just have to keep looking.”

I like Northcraft’s observation because it makes concrete this ‘startup-y’ quality that I’ve long struggled to express. My boss and I used the term as shorthand for the kind of effectiveness needed to thrive in a startup. With Northcraft’s quote, I now have a better definition for this quality, which means I have the beginnings of a test that I may use in the future.

The quality that my boss and I were looking for, this elusive ‘startup-y’-ness, was the property of a person who could be told, say, “go figure out marketing for us”, with the certainty that in a year or so they would not only have figured out a workable marketing strategy, but would also have discovered several deep principles about successful marketing in our specific niche. They could then be told to hand off the marketing portfolio, and be redeployed to some other adjacent problem in the business.

My boss relied on me to figure out how to run our Vietnam operations: to understand the Vietnamese labour market, and to supply our company with the necessary engineering capability required to achieve our goals. I relied on him to figure out the business model and sales strategy … and a lot more besides. We understood each other when he said “we need a startup-y sales lead” — he meant that he wanted someone who could figure out the thorny problem of scaling the sales org in our market, so that he could focus on other things.

I won’t pretend that we could uniquely identify this quality. It’s been called many different things by many different people. Some friends call it ‘effectiveness’; others ask things like “is he (or she) a capable person?” Koch Industries calls it ‘Principled Entrepreneurship’, though they mean something slightly different, given that they operate rather difficult businesses in tightly regulated markets.

In the Singapore civil service, this quality is often referred to as the ‘helicopter quality’. When I first heard the term, my Singaporean friends explained it as ‘the quality of being able to be dropped off from a helicopter in a foreign land, figure it all out, and have it mostly sorted out when you return.’

Much later, I learnt that Singapore acquired this framework from Shell, and prized it because it was an articulation of founding Prime Minister Lee Kuan Yew’s approach to country building. In From Third World to First, he writes:

Running a government is not unlike conducting an orchestra. No prime minister can achieve much without an able team. While he himself need not be a great player, he has to know enough of the principal instruments from the violin to the cello to the French horn and the flute, or he would not know what he can expect from each of them. My style was to appoint the best man I had to be in charge of the most important ministry at that period, usually finance, except at independence when defense became urgent. That man was Goh Keng Swee. The next best would get the next most important portfolio. I would tell the minister what I wanted him to achieve, and leave him to get on with the task; it was management by objective. It worked best when the minister was resourceful and could innovate when faced with new, unexpected problems. My involvement in their ministries would be only on questions of policy.

And in a 1994 speech in parliament, LKY explained:

“I’ve spent 40 years trying to select men for big jobs—ministers, civil servants, statutory boards' chairmen. So I've gone through many systems, spoken to many CEOs, how did they select. Finally, I decided that Shell had the best system of them all, and the government switched from 40 attributes to three, which they called 'helicopter qualities,' which they have implemented and they are able to judge their executives worldwide and grade them for helicopter qualities. What are they? Powers of analysis; logical grasp of facts; concentration on the basic points, extracting the principles. You score high marks in mathematics, you've got it. But that's not enough. There are brilliant mathematicians but they make poor executives. They must have a sense of reality of what is possible. But if you are just realistic, you become pedestrian, plebeian, you will fail. Therefore you must be able to soar above the reality and say, 'This is also possible'—a sense of imagination.”

Northcraft’s observation gives us a simpler measurement, one that might be more useful at the level of a small business: when we are hiring, we should ask whether this person has a track record of developing predictive models. Or to say this differently: “Tell me about a time when you were thrown into something new, felt completely lost, and then mastered the domain. What were your results? How did you do it? And how long did it take?”

If expertise is the property of having a predictive model that works, then effectiveness is the ability to build predictive models in new domains. This is different from experience in tangible ways.

What is the best leading indicator of this ability? Well, this is simple: it is a track record of having done so in their recent past.

I’m not sure if this criterion is complete. But it is what I intend to test. What Northcraft’s observation has given me is the language to reason about it.

More happily, I now have a concise answer for the difference between experience and expertise: expertise is having a predictive model that works. Experience does not guarantee one.