Case
The Rise and Fall of Data General

In 1982, Tracy Kidder’s The Soul of a New Machine won a Pulitzer Prize for General Nonfiction. The New York Times wrote that the book was “about real people working on a real computer for a real company”, and asserted that Kidder had “elevated it to a high level of narrative art.”
Soul is a wonderful work of non-fiction. Ostensibly, it is the story of a computer engineering team at Data General racing against time to design a next-generation computer. In truth, it is the story of a business that was starting to fail.

Data General was a hotshot minicomputer company that rose and fell with the minicomputer boom. To understand what that means, we need to talk a little about the history of the computer business.
Computers had been around since the 1940s, but for the first two decades of their commercial existence they were expensive, room-filling mainframes. In the post-WWII period this mainframe market was dominated by IBM.
IBM’s computers were built using vacuum tubes, and were sold to governments and large corporations in need of large-scale data processing. What kinds of data processing did companies and governments do back then? One early use of computers was to run the US Census. The 1880 census, for instance, took 1,495 clerks seven years to manually tabulate the final figures. For the 1890 census, this was deemed sufficiently torturous that a contract was awarded to a company for the use of an ‘electric tabulating system’ — a punched-card machine that could add up the census figures mechanically.
During World War II, a vacuum tube computer named the ENIAC was built and then used for all sorts of calculations in the American war effort — including, famously, some of the computations necessary for the Manhattan Project. This vacuum tube technology was ultimately what IBM and its competitors sold in the post-war period.
In 1947, John Bardeen, William Shockley, and Walter Brattain — all scientists at Bell Labs — invented the transistor. Transistors had a number of advantages over vacuum tubes: they broke down less often, were (in theory) cheaper to make, and were much smaller. But it took a long time for computer manufacturers to adopt them. Early transistors were less reliable than vacuum tubes and were difficult to produce.
Shockley left Bell Labs in 1953 and, in 1956, founded his own company to improve the technology. He was, unfortunately, a terrible manager, and the resulting schism with his early employees led eight of them to leave and form Fairchild Semiconductor. Decades later, this schism would be oft-cited as the event that spawned California’s Silicon Valley.
Over the next few years, Fairchild Semiconductor played a pivotal role in overcoming the reliability and production drawbacks of hardwired transistors. The approach Fairchild and its competitors took was to shrink electronic circuits and place them on small pieces of semiconductor material. Texas Instruments (TI) was the first company to accomplish this, filing a 1959 patent on semiconductor ‘chips’. Unfortunately, TI’s first chips were still hardwired.
That same year, the metal-oxide-semiconductor field-effect transistor (MOSFET) was invented at Bell Labs. MOSFETs’ lower power consumption and higher density, compared to point-contact and bipolar-junction transistors, would eventually make it possible to house a computer’s arithmetic/logic, control, storage, input, and output functions on a single chip. Texas Instruments found itself leapfrogged when Robert Noyce of Fairchild came up with ‘planar manufacturing’ — a way to embed wiring into the silicon as part of the semiconductor production process.
These two advances together were enough to overtake the hardwired transistor, and finally kill the vacuum tube. At the end of 1959, IBM announced the 1401, a fully transistorized computer. The product was ultimately very successful: by the mid-60s, over half of all computers in the world were 1401s. This was, if nothing else, an example of IBM’s sheer dominance.
It was at this point that a flood of new companies began to make cheaper ‘minicomputers’. These were smaller machines, taking advantage of the reduced cost and smaller size of these new transistors. By way of comparison, a typical IBM mainframe in the 60s and 70s cost a few million dollars and took up a whole room. Meanwhile, an early minicomputer would clock in somewhere between $50k and $200k, depending on configuration. (This was equivalent to around $400k to $1.6M in 2025 dollars — still not cheap by any means, but more affordable than any computer that ...