Note: This is Part 3 of a short series on sensemaking. You may read Part 2 here.
In Part 1 we discussed one way to make sense of AI without losing your head. In Part 2 we examined the Data-Frame theory of sensemaking, the best theory on sensemaking we currently have. In this third and final part, we’ll bring both pieces together: we’re going to update the ideas from Part 1 with what we now know about the sensemaking process.
Before we proceed, it’s worth talking about what our goals are. I think the goals I outlined in Part 1 are still valid: you want to be able to make sense of AI developments for your specific outcomes without becoming emotionally compromised. You don’t want to go on tilt. You don’t want to be blindly affected by hype. But you also don’t want to bury your head in the sand, and be taken by surprise when developments outstrip your ability to make sense of them.
At this point in the series, you already know what sensemaking is, and how it works. You know how sensemaking differs between novices and experts. Most importantly for this piece, you know:
- How one major pitfall during sensemaking is frame fixation. This is especially problematic during a period of rapid change, like what we are currently facing with AI.
- You also know that you may improve your sensemaking skills by gathering fragments from related domains. I demonstrated this by giving you a fragment from the PC revolution at the end of the previous essay. I said that the fragment would likely change the way you make sense of this current AI shift — and perhaps you’ve already experienced this for yourself. Now, of course, the question becomes: how should you seek out and expand the number of case fragments in your head?
We’re going to address these two points, in order.
To illustrate these ideas, I want to ground this piece in a real domain: the practice of software engineering. I have chosen this domain because it is the one in which AI has currently made the biggest impact. To be precise, I want to discuss an ongoing controversy between three groups of programmers, each operating from a different frame.
I should note that while this controversy is dear to me, it is not the main point of this essay. (I graduated with a Computer Science degree, worked as a software engineer for a few years; many of my friends are software engineers). This controversy is a snapshot of the industry I am — was — closest to, but it will likely take on different forms as AI spreads and impacts other fields. I’m merely using this controversy as an example because it is concrete, and because it is instructive to examine the industry’s various reactions to AI. The sensemaking response we’re going to see is fundamentally human, and therefore it is universal.
The AI Programming Controversy
You may skip this bit if you’re already familiar with the events I’m about to describe. In November 2025, awareness that AI coding agents had become good enough suddenly hit a tipping point. Within a span of weeks it seemed like everyone was giving Claude Code a go. Non-technical folks were ‘vibe-coding’ throwaway apps; programmers from across the industry began using agentic coding tools to boost productivity.
An Anthropic ethnographic report from 2 December 2025 outlined some of the impacts experienced by Anthropic’s software engineers over the preceding six months. The report serves as a useful snapshot of the effects felt within the industry at the time. Within weeks, I had corroborated many of the claims from the report with posts from social media, conversations with friends, and eventually my own personal experience with the tools:
Engineers tend to delegate tasks that are easily verifiable, where they “can relatively easily sniff-check on correctness”, low-stakes (e.g. “throwaway debug or research code”), or boring (“The more excited I am to do the task, the more likely I am to not use Claude”). Many describe a trust progression, starting with simple tasks and gradually delegating more complex work—and while they’re currently keeping most design or “taste” tasks, this boundary is being renegotiated as models improve.
(…) Claude enables people to broaden their skills into more areas of software engineering (“I can very capably work on front-end, or transactional databases... where previously I would've been scared to touch stuff”), but some employees are also concerned, paradoxically, about the atrophy of deeper skillsets required for both writing and critiquing code—“When producing output is so easy and fast, it gets harder and harder to actually take the time to learn something.”
(…) Some engineers embrace AI assistance and focus on outcomes (“I thought that I really enjoyed writing code, and I think instead I actually just enjoy what I get out of writing code”); others say that “there are certainly some parts of [writing code] that I miss.”
(…) Employees estimated that 27% of their Claude-assisted work wouldn't have been done without it. Engineers cited using AI for scaling projects, nice-to-haves (e.g. interactive data dashboards), useful but tedious work like documentation and testing, and exploratory work that wouldn't be cost-effective manually. As one person explained, they can now fix more “papercuts” that previously damaged quality of life, such as refactoring badly-structured code, or building “small tools that help accomplish another task faster.” We looked for this in our usage data analysis as well, and found that 8.6% of Claude Code tasks involve ‘papercut fixes.’
These responses came from a sample of Anthropic’s own software engineers, which means the sample was positively biased: if you work at a frontier AI lab, you are more likely to adopt AI technologies, and you are also more likely to have a positive reaction to them.
However, over the course of the subsequent few months, broad AI coding agent adoption began to cause second and third-order effects throughout the industry:
- Senior developers began to grapple with less conscientious colleagues who would inflict large, unreviewed AI-generated pull requests on teammates. Many of these colleagues were more junior engineers, relatively new to the practice of software engineering and running wild with the new affordances of AI agents. Some devs had to deal with AI-pilled bosses with subpar engineering skills spraying buggy slop all across their codebases. Collectively, the industry began talking about how ‘code review’ was rapidly becoming the bottleneck for software development.
- Open source projects began to be overwhelmed with ‘slop’ code contributions. One famous instance was the curl creator, Daniel Stenberg, “putting his foot down on all AI slop security reports” on 4th May 2025, as a result of a particularly bad submission. Just a few months later though, Stenberg reported a successful AI-augmented security report from Joshua Rogers. “Mostly smaller bugs, but still bugs and there could be one or two actual security flaws in there. Actually truly awesome findings (…) I have already landed 22(!) bugfixes thanks to this.” On the other hand, the Zig project announced ‘the most stringent anti-LLM policy of any major open source project’ — and for good reason.
- Prominent developers began publishing field reports about how AI coding agents have accelerated their development work — and in some cases made possible bug fixes and feature implementations they would’ve never considered, since these would have taken many times the amount of time.
- Tech companies began to announce layoffs under the banner of AI adoption. The most prominent of these was Block, which announced a 40% cut in late Feb 2026, purportedly as a result of AI productivity gains. This was roundly criticised as cover for historically terrible business execution.
- The public valuations of SaaS (Software-as-a-Service) companies crashed over the course of February 2026 in reaction to the advancement of AI coding agents.
- Cloudflare published a report on 24th February 2026, announcing that they had successfully cloned Next.js, a popular open source React framework that is owned by Vercel, a competitor, in ‘one week and with $1000 worth of tokens’. The new framework had zero dependencies on Vercel’s platform. This was notable because it eroded one of Vercel’s core competitive advantages.
- Somewhat related to Cloudflare’s move, AI-augmented ‘clean room’ software reimplementations began to emerge. This was alarming because such reimplementations could be released under a different license, allowing companies to get around software license restrictions.
In response to this spread, software engineers began to fracture into three groups.
The first group is the “never AIs”. This group consists of folks who have attempted to use AI for computer programming in the past, and have concluded that it cannot work. Perhaps they reject it for ethical reasons. Perhaps they work in companies with sloppy AI-use mandates, and they resist bad technology being foisted onto them. Or perhaps they cannot see a path for this fundamentally probabilistic technology to lead to real solutions. For whatever reason, they resist AI use and are justified in doing so.
The second group consists of ‘pragmatic AI adopters’. These software engineers see themselves as grounded folks who need to get stuff done. They use AI tools, but they continue to hold on to existing software development best practices. “The fundamentals have not changed,” they say. “You still have to review every line of code you generate, because you are ultimately responsible for it.” They link to reports showing that software teams with good practices benefit more from AI than teams with terrible software engineering practices. They heap scorn on those who hype AI uncritically. They have used enough AI to know that it works well when applied in such-and-such a manner, but that frontier models continue to fail miserably on a range of idiosyncratic tasks. Their field reports reflect the constraints of the real codebases they labour under.
You may or may not have noticed this, but every single field report I linked to, above, is a report from one of these folks. Mitchell Hashimoto (the founder of HashiCorp), Armin Ronacher (the creator of Flask and founder of the Pallets project), and Salvatore Sanfilippo (antirez, the creator of Redis) all belong to this group. They inhabit a frame where adopting AI is inevitable, but there is no need to radically update one’s understanding of the Software Development Life Cycle (SDLC).
I identify the most with this second group. I was trained as a software engineer, and whilst I am not believable on the discipline (I was only ever a software engineer for a handful of years before becoming a business manager), the frame that these folks inhabit is familiar and comfortable. I like the idea of writing clean, beautiful code. I like it even though I have written literally thousands of lines of messy code under the weight of crushing deadlines, often casting aside my manager hat in order to ship certain deliverables on time. By my own calculations, my shit code has generated millions of dollars of profit for my old company — which should make the businessperson in me feel good. In truth, however, I mostly feel icky. My frame is “the collective knowledge of good software engineering is real, and it is worthwhile to make good software.”
And so it is the frame of a third group that has posed a challenge for me. Some folks in Commoncog’s private members forum call this group the ‘software dark factory folks’. Folks in this group believe that it is possible to have AI coding agents write code with little to no human intervention. That is — AI agents generate code at a high velocity, AI agents review the code, and AI agents ship the code, with no human in the loop. The approach calls for “building the machine that builds the software.” I’ve linked to a few of their field reports before. For instance, on 29th September 2025, Microsoft Deputy CTO Sam Schillace published I Have Seen The Compounding Teams — which asserted that the ‘dark factory’ pattern was possible, though it takes the average team six months to get to this point. He followed this up with throwaway remarks on 16th March 2026 (The Rise of Taste) and 23rd March 2026 (Why Not vs What If) — roughly six months later, saying that he’d gotten Microsoft Research’s internal ‘dark factory’ agent harness working for himself. On February 11 2026, OpenAI published Harness engineering: leveraging Codex in an agent-first world. If you read that post carefully, you’ll notice lots of odd little details required to get such an approach to work, justifying Schillace’s ‘six months’ observation. And there are other such reports if you know how to look, sometimes from folks who are cautiously experimenting with the idea (Justin Cormack, Adam Jacob, Simon Willison on StrongDM).
What is really clear is that these folks are inhabiting a radically different frame compared to the second group. More importantly, folks in the second group cannot accept the frame inhabited by the software dark factory folks.
I’m not the only person to have noticed the existence of these three groups, of course. The aforementioned Adam Jacob did a podcast recently where he described the three groups in nearly the same terminology, and reflected a little on how he relates to each of them.
And I think it’s instructive to take a look at what Jacob has been saying, and how other software engineers from the ‘pragmatic AI adopters’ group are reacting to him. Here’s a March 17th 2026 LinkedIn post (all bold emphasis mine):
I’ve seen a couple things floating around to the effect that “I’ve been using Claude and I’m not more productive than I was before, because it writes unmaintainable code.” I resonate with that experience — it’s the same one I was having for months, where what felt true was that this was good for prototypes, but fell apart under its own weight inevitably. If you’re using Claude Code/Codex/Cursor on a large existing code base, built by hand through careful crafting by a team, over the course of years (decades) — this is almost certainly your experience.
Contrast that with our experience building Swamp this way from scratch. We’re paying a lot of attention to architecture. We’re paying very little attention to the code. We’re using agents at every stage of the SDLC — we have skills that express our architecture patterns, design documents, adversarial review, comprehensive UAT. It’s the highest trust team environment I’ve ever been in, because each person is capable of shipping basically anything they want in a timeframe that feels bonkers (we shipped remote execution for workflow jobs in an afternoon, for example.) We spend as much time automating the SDLC as we do writing the product (I expect this to slow down eventually)
Today it’s hard to see how existing teams move to work like we’re working. Tomorrow it won’t be. Don’t fall into the false comfort of believing that because your current situation (socially, technically) makes this shift hard to do, it isn’t coming to your team eventually, or won’t work at scale. We will figure all of those things out as an industry.
The best thing to do today is experience it for yourself. It’s too early to solidify a pattern, or to claim perfect knowledge of the end shape. It’s changing every day. But consensus will emerge, because there won’t be 1000 working patterns.
If you feel some discomfort with his remarks, you’re not alone. From the comments to that post:
This is your opinion and it is wrong.
“it doesn't work but tomorrow it will but I'm going to offer no evidence”?
Are you suggesting that over the last two decades no company who is writing software has scaled? Because your claim is “if you’re not doing it precisely the way we’re doing it now, you won’t scale”… which… seems like perhaps too broad a claim.
These are comments from folks who inhabit the ‘pragmatic software engineer’ frame, and they cannot accept anything Jacob is saying. Why? This is quite easy to understand, because we now have the language of the Data-Frame theory of sensemaking. Humans construct data within the context of a frame. Nothing Jacob says makes sense within the existing software engineering frame (see all the bolded bits above). The only way to understand Jacob (and Shillace, and others) is to construct a new frame — the one that they’re operating in. You’re going to need to find a few new anchors to construct their frame in order to accept what they’re describing.
And you can already guess what I’m going to say. The danger is that it turns out this third group is right. If software dark factories are possible, then the entire practice of software engineering is going to change. But how can you take the software dark factory folks seriously when you encounter incredibly stupid coding agent behaviours in your own day-to-day work? Nothing they say lines up with your own frame; all their ‘data points’ are off-the-cuff remarks that may be rejected due to your own lived experiences.
“They’re lying,” you think to yourself. “They can’t possibly be right; their brains are eaten by hype.” And so you ignore them and stick to your existing frame.
Technique One: Fill in an Alternate Frame
Avoiding frame fixation sounds easy when it’s about another person’s domain. It’s less easy when a) it’s about your own domain, b) the new frame goes against everything you believe about your own hard-won expertise, and c) the new frame is fundamentally uncertain.
I want to talk about that last point for a bit. We do not know if these ‘software dark factories’ are possible. I’m not saying that the folks writing field reports are lying, or that the benefits they’re already seeing are fake. I’m saying that we can’t know what the tradeoffs are, or where the limits of this approach lie. Nobody can. This is a new technology with new affordances. This is what uncertainty feels like.
But I think it’s also true that you need to take this ‘dark factory’ frame seriously. There are enough field reports now, from enough unrelated people, to indicate that something is going on. More importantly, the potential impact on your career — if you are a software engineer — is too large to ignore.
Thankfully, the Data-Frame theory already offers us one way out: you don’t have to believe their frame. You may hold on to your current frame, and elaborate a second frame in parallel. Folks who believe humans are Bayesian updaters won’t have this cognitive move available to them — they will think that the way to integrate new data is to do careful updating on their priors, with a belief score assigned to new developments. But if you take the Data-Frame theory seriously, you’ll know that you can do what many experts across different domains do: hold an alternative frame in abeyance, and use the human brain’s natural tendency for confirmation bias to elaborate said frame. You may then switch to (or discard) the second frame later if you wish.
The thing that did it for me was a senior software engineer taking me aside and saying “Cedric, I don’t think you’re taking the implications of cheap code seriously enough.” That disequilibrated me sufficiently to rethink my beliefs. If we translate what this engineer was saying to the language of the Data-Frame theory, he was saying that one of the anchors of my current frame was now invalidated. And indeed, he continued: “So many of our existing practices are built around the idea that code is expensive to write. Code is cheap now. What does that change?”
Once you discard that anchor, it becomes easier to construct a new frame. This happened to me around three months ago. And then I started noticing something interesting.
Nearly every software engineer who was open to this new frame had a prior professional experience where a core anchor in their domain was invalidated. In March, I interviewed a handful of senior engineers who were experimenting with this third frame. One of them told me: “When I was younger and the cloud was starting to take off, there was this new belief that servers could be treated as discardable. You know (the saying) ‘cattle not pets’? Well, the first time I heard this, I didn’t understand how it could be possible. It was crazy! Then we learnt about Netflix’s chaos engineering — and that seemed crazy! And yet it was possible, and it changed the way we did devops over the subsequent decade.”
I was in university whilst this transition was happening, so I wasn’t affected by it; 'cattle not pets’ had already become widespread when I graduated. The idea went something like this: once upon a time, servers — the computers that ran your websites, or your enterprise software — were expensive to provision. As a result, most servers were long-lived. The sysadmins who managed such servers would often run one-off maintenance scripts, or — hell — log into those servers to run maintenance commands manually. Over time, most servers became unique creatures that carried idiosyncratic configurations. It became scary to migrate off them, and it was often a major ceremony to provision new servers.
Then, in 2006, Amazon launched AWS. What we now know as ‘the cloud’ emerged over the next decade. With the cloud came a new frame: servers were cheap to provision. Hell, with AWS you could detect that your servers were overloaded, and programmatically spin up new servers on the fly. You could also reverse this: detect that load had gone down and spin those servers down again. Hence “cattle not pets” — treat your servers as discardable cattle, not unique pets.
This change created an entirely different way to think about software deployments. It saw the rise of new tooling which enabled large scale server orchestration. It changed the contract between software developers and sysadmins. It enabled crazy new approaches to resilience, like Netflix’s Chaos Monkey — which was software that would randomly turn off servers within Netflix’s production environment, so that software developers would be forced to build resilient software.
Everything I’m describing sounds trite today, but it was unimaginable just a few years earlier. The transition itself took five to seven years to spread throughout the entire software industry. Some folks worked in organisations that could adopt this new way of doing things quickly. Others found it harder. As with all socio-technical shifts, the social bits of the shift take longer than you might expect.
Many of the names I’ve cited above have come from this transition:
- Adam Jacob created Chef (the DevOps automation and server infrastructure management software) and co-founded Opscode, the company behind it (later renamed Chef Software).
- Justin Cormack was CTO of Docker, and oversaw the transition from VMs to containers as the unit of software deployment. (His attempt at replicating Amazon’s object store service, S3, using ‘software dark factory methods’ is linked above. I suspect he’s elaborating a parallel frame with that project — he hasn’t fully committed to the software dark factory frame but wants to see what the limits are.)
- Sam Schillace, Microsoft’s Deputy CTO, did not come from the DevOps transition, but created the software that eventually became Google Docs. He writes, in a recent post reflecting on this AI transition: “I had the privilege of being on the founding team of what became Google Docs. It was an interesting experience in many ways, but one of them was experiencing firsthand the dissonance between people telling me it was a terrible idea (not just Writely, but the cloud overall), and the half million or so people who were using and loving it.”
What is common here? It is this: they all experienced a transition where one of the anchors of their previous frame had been invalidated. And then their careers benefited massively from the transition. As a result, they are now pattern matching against that structure and actively investigating this opportunity, hoping to benefit from another large transition.
And if you know how to look, you’ll realise that they say as much, if with different words. Here is Marc Brooker, another senior engineer who experienced the transition from server to cloud to serverless (he is a distinguished engineer at AWS; he built AWS Lambda, led the team that released Aurora DSQL, and helped create the Firecracker VMM):
Many of the heuristics that we’ve developed over our careers as software engineers are no longer correct. Not all of them. But many. What it means for a system to be maintainable. How much it costs to write code versus integrate libraries versus take service dependencies. What it means for an API to be well designed, or ergonomic, or usable. What it means to understand code. Where service boundaries should be. Where security and data integrity should be enforced. What’s easy. What’s hard.
We’ve seen this play out in small ways before. Over the last decade, I’ve frequently been frustrated by experienced folks who didn’t update their system design heuristics to match the cloud, to match SSDs, to match 100Gb/s networks, and so on. But this is the biggest change I’ve seen in my career by far. An extinction-level event for rules of thumb.
If you take this frame seriously, what do you get? You get something like the following:
- Code generation is cheap now. What changes to the SDLC can we make to fully leverage this?
- We don’t know where the optimal tradeoffs lie. Initial attempts all seem to trade away human readability (and code quality measures that correlate to human readability) in favour of program correctness and development velocity.
- We get there by a) not allowing human developers to touch the code, only the agent harness, and b) limiting the degrees of freedom available to the AI. How best to limit the degrees of freedom available to the agent is a matter of ongoing discovery. (See Zero-Degree-of-Freedom LLM Coding using Executable Oracles by John Regehr for an outline of this approach.)
- Since code generation is cheap, many of these projects seem to expect 100% test coverage as a minimum. Depending on the team, there’s also interest in lightweight formal methods, property-based testing (see the launch of Hegel), and deterministic simulation testing (see Antithesis).
- All documentation is checked into the repo, in the form of greppable specs that are broken down by concern in a top-level table of contents (so as to not overload the context window).
- The AI agent should be able to check its work and self-correct. That means access to a web browser (for the user-facing parts of a web app), and server logs (to correct for bugs in deployment).
- To improve agent performance, the codebase is structured with strict boundaries and predictable structure. This structure is enforced with custom-generated linters and tests to ensure that data-flow doesn’t violate the chosen architecture. The OpenAI approach seems to be to enforce a unidirectional data flow. Data is always parsed at the boundary (an idea stolen from the Haskell community), and then data flow can only go in one direction in the codebase. It appears that you can get quite creative with how to enforce these invariants. From the OpenAI report: ’In practice, we enforce these rules with custom linters and structural tests, plus a small set of “taste invariants.” For example, we statically enforce structured logging, naming conventions for schemas and types, file size limits, and platform-specific reliability requirements with custom lints. Because the lints are custom, we write the error messages to inject remediation instructions into agent context.’
- Agents seem to replicate patterns that are already found in the codebase, so there needs to be a series of automated runs to clean up AI slop. These run autonomously, during lull periods, on a regular cadence.
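To make the custom-lint idea above concrete, here’s a minimal sketch of what such a lint might look like. It is hypothetical — not taken from the OpenAI report — but it illustrates the pattern of enforcing a “taste invariant” (structured logging, in this case) with an error message written so that it doubles as a remediation instruction injected into the agent’s context:

```python
import ast

# Hypothetical custom lint: forbid bare print() calls in favour of structured
# logging. The error message is written as a remediation instruction, so that
# when the lint fails inside an agent loop, the agent knows exactly what to do.
REMEDIATION = (
    "print() is banned in this codebase. Use the structured logger instead, "
    "e.g. logger.info(event='...', **fields), so logs stay machine-greppable."
)

def lint_structured_logging(source: str, filename: str = "<module>") -> list[str]:
    """Return one violation message per bare print() call found in `source`."""
    violations = []
    tree = ast.parse(source, filename=filename)
    for node in ast.walk(tree):
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id == "print"
        ):
            violations.append(f"{filename}:{node.lineno}: {REMEDIATION}")
    return violations

# Example: an agent-written snippet that slips in a print() call.
snippet = "def handler(evt):\n    print('got event', evt)\n    return evt\n"
for msg in lint_structured_logging(snippet, "handler.py"):
    print(msg)
```

A real harness would run dozens of lints like this on every agent iteration; the point is that because code generation is cheap, writing one-off lints for each invariant is cheap too.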
Notice that one core anchor for this frame is simply: code generation is cheap. This means writing tests is cheap, using formal methods is cheap, creating linters and lightweight agents to enforce more subjective invariants is cheap. How do we compose these methods together to get high development velocity with high correctness and a tiny team? What are the limits of this approach? We don’t know, but these teams are finding out.
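Property-based testing, mentioned above, is a good example of a “cheap tests” technique: instead of hand-writing example cases, you assert invariants over many generated inputs. Libraries like Hypothesis automate this (with input shrinking and smarter generation); the core loop is simple enough to sketch with the stdlib alone. The function and invariants below are illustrative, not from any of the cited teams:

```python
import random

def dedupe_preserving_order(items):
    """Function under test: remove duplicates, keeping first-seen order."""
    seen, out = set(), []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def check_property(prop, gen, runs=200):
    """Run `prop` against `runs` random inputs; return a counterexample or None."""
    for _ in range(runs):
        case = gen()
        if not prop(case):
            return case
    return None

# Generate random lists of small ints, of random length.
gen = lambda: [random.randint(0, 9) for _ in range(random.randint(0, 20))]

# Invariants: the output contains no duplicates, and every surviving
# element was present in the input.
no_dupes = lambda xs: (
    len(set(dedupe_preserving_order(xs))) == len(dedupe_preserving_order(xs))
)
all_from_input = lambda xs: all(x in xs for x in dedupe_preserving_order(xs))

assert check_property(no_dupes, gen) is None
assert check_property(all_from_input, gen) is None
```

The attraction for an agent harness is that invariants are far harder for an AI to game than example-based tests: the agent can’t memorise its way past two hundred random inputs.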
Technique Two: Read Takes to Infer the Frame
In Part 1 I wrote that you should stop reading takes, because they are of limited use — first, they are likely to be written for audience-building purposes (which benefits the author more than it benefits you), second, many of them are written for self-soothing reasons, and third, reading takes is not a good use of your time if your goal is to sensemake for the outcomes you care about.
Now that we understand the Data-Frame theory, however, we may reintroduce takes into our information diet, albeit in a limited manner. The reason is that we now have a new tool in our sensemaking toolbox: we may use takes as a way to infer new frames.
Try this: when you are reading a take, ask yourself “what frame is this author operating from?” Sometimes the answer is obvious, in which case you may skim the rest of the piece and toss it. But occasionally you might feel confused — you might discover that you are not able to answer this question with confidence. If you assume that the author you are reading is not dumb, it might be worth it to investigate the frame they are operating from. In fact, I argue that you absolutely should do so if a) it doesn’t take much work to generate this information, and b) the outcomes implied by this unknown frame might have implications for things that you care about.
Here’s a concrete example.
Gabriella Gonzalez is a famous Haskell programmer. She is believable in every sense of the word. On 17th March 2026 she published an essay titled A Sufficiently Detailed Spec is Code.
In the essay Gonzalez pushes back on the idea — currently publicised by ‘agentic coding advocates’ — that you can generate code purely from writing specification documents. Her argument references the famous Dijkstra observation that any attempt at using natural languages (such as English) to write computer programs is doomed to failure. From this, Gonzalez argues that spec-first programming cannot possibly deliver the benefits its promoters are selling because a) specification documents are not simpler than the resulting code (meaning they are not cheaper to write), and b) specification work is supposed to be more thoughtful than coding work — but if specs are AI-generated, they will be slop. After all, if you feed slop specs to an AI coding agent, you shouldn’t be surprised if you get slop code out the other end.
I will admit that I skimmed Gonzalez’s piece — not because I disagreed with it, but because I agreed. And of course I would. I first encountered Dijkstra’s argument whilst doing my computer science degree (I specialised in programming languages). I remember being persuaded then. Seen in this light, there was nothing new for me in Gonzalez’s take — she was writing from the perspective of the ‘pragmatic software engineer’ frame (albeit from a position of significant authority), and she was citing literature that is established canon in our field. I skimmed her piece and set it aside; there was little sensemaking value in examining arguments that I already agreed with, in a frame that I already inhabited.
Around the same time, I stumbled onto Hrishi Olickel’s RISC Won: Building Towards Data AGI. The article describes a year-long attempt at building a data analysis agent harness. Olickel’s account is notable because his journey begins before Claude Code is released, and much of the trial and error occurs before Claude Code becomes popular. This means much of the experimentation goes down paths that we know are agent harness dead ends today.
In the piece is this paragraph:
The start of Q3 (June) is when we switch completely from writing code to writing specs. Almost all product development from this point on gets done by writing large, 5-10 thousand word specifications. We're betting that by Opus 4, code can be generated as needed, and iterations can happen faster and more collaboratively over English specifications.
This bet pays off.
We build specs for things we need internally that later models are able to simply one-shot.
Now this was a perspective I did not understand. What benefits would come from writing and iterating on specs so you may one-shot the code later? Sure … I buy that AI models are improving over time, but why would you want to throw generated code out after every model improvement?
If I were fully committed to the previous ‘pragmatic software engineer’ frame, I would have dismissed this piece of data as “silly” or “not important to investigate”. But as mentioned earlier, I was open to the idea that code generation is now cheap, and that new affordances might exist as a result of this new anchor. In such a scenario it is valuable to pay attention to experimentation. It was also clear how this frame might be valuable to me: I run a business, I occasionally consult for others, and I hire software engineers. It is not difficult to imagine how I might profit from Olickel’s workflow — especially if I deploy it in various business experiments.
Once you are open to the existence of a new frame, you may begin seeking out others who inhabit it. It didn’t take me long to stumble upon Marc Brooker’s Spec Driven Development Isn’t Waterfall. Notice that this is another take, though one on the opposite side from Gonzalez’s. This shouldn’t be surprising, given Brooker’s previous frame. He writes:
This approach has several advantages which I’ve written about in the past: keeping context on the bigger picture (a map, versus the turn-by-turn directions of vibe coding prompts), the ability to mix levels of formality and detail to meet the needs of a particular piece of software, serving as always-in-sync documentation, allowing implementation of the same code in multiple languages or with multiple frameworks, and the ability to lift what matters out of the muck of the implementation. One advantage, though, is looking to override all of these in importance: we’re seeing the largest improvements in velocity and delivery in teams and processes that can allow agents to run autonomously for long periods of time. Specifications do exactly that. By providing the agent with a clear map, we can set an agent off building without a human inside the tight loop of development and testing. The agent can also write higher quality, better designed, and better tested code by seeing the big picture. It knows what to test, and what good looks like.
Specifications aren’t up-front designs because you don’t need to, and probably shouldn’t, develop the entire specification upfront. Instead, specifications should be at the core of an iterative software development practice. Humans are still critical to this outer loop of software development, driven by refining and extending the specification. Perhaps most crucially, they own the internally conflicting nature of software requirements. Where conflicts and trade-offs exist, either technical or in product requirements, expertise and experience come into play.
If Brooker, too, has this frame, then it is likely that other senior engineers across the industry have constructed the same frame. You may hunt for them.
Finally, it is not expensive for me to generate more information about this frame. Olickel attended the National University of Singapore, my alma mater, where I helped create the largest hacker club on campus. It would’ve been possible to work that network to set up a conversation with him in order to ask, respectfully, about his experiences. And even if that were not possible, it would not be too difficult to send a cold email!
(In the end, I had no need to do that — a Commoncog reader set up a call with Olickel and I asked them to ask questions on my behalf.)
But here is my point: this is a concrete example of using a take as a signal of a new frame. Constructing that frame becomes easier once you detect its existence. Notice how the reward-to-effort tradeoff was favourable enough that it was well worth my time to investigate — and indeed I spent a fair bit of time reading the documentation of Hankweave, the harness Olickel released. It is in this manner that takes may still be useful.
Let me be clear, though: takes are still not valuable in terms of direct information content — at least not from a sensemaking perspective. I didn’t spend much time on Gonzalez’s take, because I already understood the frame it was written from. But takes are useful if you treat them as sources of embedded frames.
Just … read them sparingly.
Technique Three: Collect Fragments from Prior Technological Shifts
In the previous instalment I argued that it is important to collect fragments from related domains, in order to improve at frame construction. Of course, a natural question is a) what fragments should you collect, and b) how do you find them?
Given that we’re talking about AI here, a natural source of fragments is past technological revolutions. Specifically, we want to look for:
- Examples of companies that won (or lost) as the result of the new technology.
- Stories from the diffusion of these new technologies. Specifically: what changed, how long did the changes take, and how were various parts of society affected?
You may want to supplement these with other fragments, specific to your industry and to your personal situation. This doesn’t have to take a huge amount of time — you could skim through biographies, or simply ask older folks to tell you the story of their experiences.
At this point in the series I don’t have to explain why you might want to collect such stories. Of course, the common retort is “Why are you reading history? AI is a fundamentally new technology and it is unlike every other technological revolution that has come before.” But you are not collecting these fragments for naive pattern matching. You are collecting these fragments for frame construction. Ironically, frame construction enables the kinds of advanced pattern matching that experts do. Naturalistic Decision Making researcher Jared Peterson likes to say (my paraphrase) “expertise is fundamentally pattern matching, and experts are able to pattern match problems that they’ve never seen before.” How? Well, you already know how: the Data-Frame theory gives you the mechanism.
With this in mind, you might see why I suggested the two categories of fragments above:
- You’ll want to look for stories of technologically-enabled competitive advantage because new technology is both an opportunity and a threat. How have companies adapted to the affordances of new technology in the past? What were the problems? What did that look like?
- You’ll also want to ground your perspectives in real stories of societal disruption. AI prognosticators tend to throw around simple stories of prior technological displacement. “We invented the automobile and then horses vanished … don’t be like the horse” they write. Such prognosticators are then surprised when they learn that more horses were deployed in WWII than in any other war in history, far outstripping the horses deployed in the Napoleonic wars, for instance. They might also be surprised to learn that railroad companies — the high technology companies of their day — employed 100 times more horses than automobiles two decades after the founding of the Ford Motor Company (and five decades after the invention of the automobile). This occurred even as automobiles began displacing horses for public (and private) transportation in the rich capitals of the world, starting from 1917 onwards. The point is that technological diffusion is a lot weirder than you might think.
The good news is that you only need a handful of fragments in your head — I would aim for 10-20 calibrating cases. It may take a bit of effort to seek these cases out, though. One good place to start is The Shock of the Old by technology historian David Edgerton (both horse anecdotes above are taken from Edgerton’s book). Another good book to skim for fragments is Engines That Move Markets by Alasdair Nairn.
But we intend to help you with this. Over the course of this year, we at Commoncog will begin seeking out, researching, and then publishing fragments in both categories. This service is intended for members; many cases will be published behind the paywall.
If you have suggestions for books or cases please ping us in the forum.
Wrapping Up
What have I shown you?
I’ve made three arguments in this essay:
- First, in order to sensemake effectively, you’re going to have to commit to alternative frames that you might not necessarily agree with. It is tempting to stick to your current frame, and to reject all new developments as transient. But this is dangerous if the outcomes implied by these new developments may affect your livelihood. (Conversely, major shifts can often represent opportunities to advance your career, as mentioned earlier). I suggested two strategies to make it easier to commit to alternative frames: first, engaging in the reframing cycle does not mean you have to give up your current frame. You may continue to believe what you currently believe and elaborate an alternative frame in parallel; you should find this quite natural to do. Second, you should look for anchors that will allow you to construct that alternative frame (again, without necessarily accepting it). I gave an example of committing to the frame of ‘autonomous AI coding agents are possible’ when a senior engineer told me that I was not taking the notion of ‘code is cheap’ seriously enough. That anchor happened to work for me; it might not for you. The goal is to find one that works — again, assuming that the potential impacts are serious enough that you are motivated to do so. This brings us to …
- It is acceptable to read takes … if you are doing it for frame construction purposes. This is a nuanced point. The vast majority of predictions you read are still going to be wrong. During a technological shift, you will still encounter many writers who are opining on the ‘current thing’ for attention. Others will write for self-soothing reasons. These facts are not going to change. But you now have a new tool in your toolbox: instead of reading these opinions at face value, you may read them for frame construction reasons. You may ask: what frame are they operating from? Why do they hold such a frame? Are there data points that they’re using as anchors that I’m not seeing? Of course, you do not have to accept their frame, in the same way that you do not have to take their opinions seriously. But you may proceed to evaluate the frame they’re operating in.
- Finally, you may improve your sensemaking ability by collecting fragments from prior technological shifts. The goal is to calibrate your expectations of how change has happened in the past, so you are not tricked by ungrounded speculation in the present. Specifically, I proposed that you want to collect two types of fragments: first, stories of companies building competitive advantage and benefiting from the new affordances of the new technology. Second, stories of societal impacts and broader change as a result of new technologies. In both cases you want to pay special attention to the timelines involved.
There’s nothing magical about better sensemaking. In truth, the various cycles of the Data-Frame theory are simply part and parcel of being human. You likely already do some form of this in some aspects of your life. The only thing we’ve accomplished here is to draw out some common sense implications of the Data-Frame theory, and we have applied them to the ongoing AI wave, a consequential new technology.
The techniques outlined here may be adapted for other things in your business or in your life. Hopefully, after reading this series, you’ll know how to do just that. But on the topic of AI, at least, you are better equipped to sensemake the rapid changes that are happening here.
Hopefully, you won’t lose your head.