This was written as a post in the private Commoncog members forum, as a response to a young person worried about AI making young knowledge workers obsolete.
The meta point of this response is that it is possible to act without prediction — something that might not be obvious if you have never attempted to achieve outcomes in a complex adaptive system. But the truth is that most good businesspeople do not attempt to predict the future; instead, they design their lives and their organisations for ‘fast adaptation under uncertainty’. This is an idea that we’ve covered before on Commoncog. This letter provides a concrete example of what that looks like, at least for our current AI moment.
Another implication is that you may ignore any essay that attempts to predict the future. And you may do so with peace of mind. Read on to see why.
I’ve been thinking about your question for a few days now, and I think I finally have a coherent response. In the past we’ve discussed how difficult it is to predict anything about the effects of AI adoption. I’m not sure if I’ve said this explicitly here, but I’ve said elsewhere that the lesson I took away from the unpredictability of the Covid years is that prediction is hard and humans are not smart enough to do it well; I shouldn’t bother with it.
Of course that is easy to say, but hard to do. A more compelling way to put this is that “you should learn to act without the need for narrow, accurate prediction”. Which is why I feel more comfortable about the uncertainty of the current AI boom.
When I was preparing for this reply, a part of me thought about addressing your concerns directly. You linked to a research paper by OpenAI on why LLMs hallucinate, with the implication that perhaps they would be able to cut down on such hallucinations. I sent the paper to an AI researcher friend and he laughed at it. (Technical argument: no matter what you do, you will never be able to solve aleatoric uncertainty in natural language). But I don’t think attacking the paper is a good idea. In a way, the ‘LLMs that hallucinate less’ result isn’t really what you’re asking about. Say I get a bunch of AI researchers to debunk this and you are convinced. Tomorrow there may be yet another result, about a different AI capability, that implies “everyone will lose their jobs and starve”, and you will extrapolate from that result the way you are extrapolating from this one.
No, the real problem is this:
1. You are afraid you cannot get a job because AI.
2. You believe that in order to prevent (1), you must be able to predict the future accurately, never mind that the smartest people of every generation have atrociously bad track records of making predictions. (Maybe you think this time is different, and folks today are smarter. Maybe you think you just need to try harder and you will do better than prior generations.)
3. So you spend a lot of cycles trying to predict and model changes in the world better.
In truth, embedded within your worldview is a particular model of technological change. It goes something like this:
- A new technology arrives, and it changes society in breakneck time.
- If you do not act before it changes society, you will be too late.
- Because of the speed of change, it is not possible to watch what’s happening right now and then act in response to it; you must predict accurately, before it happens, so that you don’t get caught out by the change.
Here is an alternative approach.
- You should read actual historical examples of technological change resulting in career displacement. Aim for at least 10. Find some that are breakneck fast, and some that take longer. Have a bias towards books with oral histories or actual interviews with the workers being displaced by the new technology. It is ok to skim and jump to the specific chapters about such displacement, so you can finish this research project quicker. Look across industries. (ChatGPT can help you, but I’ll also give some suggestions later). Right now the model of technological change in your head is made up; it is not calibrated against real world examples.
- Here is what you will find. (Again, my telling you this does NOT remove the need to go and do this research for yourself — you can only calibrate properly if you fill your head with the stories and experiences of others. Otherwise you will find that your brain will discount what I am telling you.) First, in the vast, vast majority of cases, it takes a long time for technological change to destroy jobs. Second, most of the folks making the most noise in a technological bubble either i) do not read history, ii) need to make it seem like their technology is world changing and the change is closer than anyone can ever imagine (because otherwise investors would not be willing to hand over their cash) … or iii) blindly believe the folks who are talking their book and doing (ii) because — haha — they themselves do not read history and believe that the folks closest to a new technology are best-equipped to predict the future.
- Once you are properly calibrated, you will have a better sense of how long these changes will take. I suspect that in your head, you currently think it will happen in a matter of … months? Or perhaps one to two years? Whichever way it is, you’ll think that this will happen fast enough that you will be caught wrong-footed.
- Or you are finding it difficult to get a job right now, and you think AI might be to blame or it might make things worse. This is a slightly different topic — I currently believe that a) it is not AI, and b) you should not make predictions! But also c) there are so many other, more important factors that determine if YOU — a single person in a specific job market — will get a job that it is not productive to talk about it here. We may talk about that separately if you’d like — start a forum thread if that’s something you want to do.
- Now: i) technological change takes longer than expected to destroy jobs, and ii) it is difficult to make predictions, especially about the future. If you are convinced by these two points, then it is actually a more prudent strategy to wait, watch, and then act in response to real world, observable changes instead of speculation. (Aka “good businesspeople do not rely on prediction; instead they do fast adaptation under uncertainty”). This is partly why I created the AI Field Reports topic.
How do you verify for yourself that technological obsolescence is slower than everyone makes it out to be?
- You can ask ChatGPT for a list of books with detailed stories of worker obsolescence. For instance, The Box talks about how the invention of the shipping container changed the world, but it also contains bits about how the container destroyed the dockworker areas in major port cities like San Francisco and New York (because containers are easier to load and unload and do not require hundreds of dockhands). You should look for stories like this, and pay special attention to the years in the narrative, keeping track of how long it takes for the change to work its way through until it impacts the dockworker. As you are reading, put yourself in the worker’s shoes, and keep asking “In X year, would I have known? Was the impending obsolescence clear by then? Should I have acted? If I had not acted, how much longer before it would have been too late?”
- Read The Illusion of Acceleration. This is a summary of Everett Rogers’s Diffusion of Innovations, which is apparently the classic text on the topic.
- The single book that you want to get to, though, is The Shock of the Old, by David Edgerton. I want to say a bit more about this.
The Shock of the Old contains a lot of shocking factoids, like how:
- There were more horses deployed in World War II than in any other war in history. These horses were deployed alongside tanks, and outnumbered the horses deployed during Napoleon’s campaigns and during the time of the Mongol hordes.
- Peak horse in Finland’s lumber industry was in 1950.
- More than a decade after the invention of the automobile, railroad companies owned 3x to 5x more horses than motorised vehicles.
- New steam-powered ships were built as late as the 1920s — even though gas-powered motors were being adopted en masse!
- The rickshaw spread across Asia in the 60s and 70s, the same decades as the Space Race in the US. Think about what that means: there were new rickshaw manufacturers expanding at the same time that man landed on the moon.
- During the PC boom of the 80s through to the 90s, people were predicting the end of books. They predicted the end of books during the Internet bubble (90s) and again during the mobile revolution (2010s). We still buy paper books today; the main threat seems to be the death of reading.
And so on, and so forth.
Why are these facts surprising to us?
Edgerton is a technology historian. He points out that whenever we go to a museum or watch a documentary, what we see is a history of inventions, not a history of use. So we all have this fake mental model of technological obsolescence in our heads: a new technology gets invented, and then we think everyone adopts it and the old tech dies a rapid death.
But Edgerton points out this is the exception, not the norm! And the reason is actually common sense. Technology is never used in a vacuum. It always exists in a sociotechnical system. That is, there are many years of maintenance know-how, training systems, regulations, unions or other organisations, existing processes, and repair and maintenance supply chains, all built up around the old technology. In order to change to a new technology, you also have to switch over all the other things around that technology. And that takes time.
And even in the fastest cases, it can take a few years. Meaning that there’s plenty of time for you to observe the change in the real world, and then switch based on your observations.
Ignore the speculation, and ignore the predictions. Most of them are going to be wrong. Just look at what is happening right now — you have plenty of time to adapt.