by Jason Crawford · June 15, 2020 · 4 min read
Is progress slowing down?
Those who favor the stagnation hypothesis point out that, other than computing, most fields have seen only modest, incremental improvements. Our cars, planes, factories, food supply, antibiotics, and so forth are just improved versions of what we had in, say, 1960.
The counter-argument is: Wait, what do you mean, “other than computing?” How can you just ignore the area where we have seen revolutionary progress? Saying, “other than orders of magnitude increases in capacity, performance, and cost; revolutionizing all of communications; connecting every human being on Earth; and putting all the world’s knowledge and culture in every pocket; what has technology done lately?” sounds a lot like “what have the Romans ever done for us?”
The rebuttal to the counter-argument is: Computing is just one area. We used to have revolutionary changes going on in multiple areas at once:
1870–1920 saw the electric generator, electric motor, and light bulb; the telephone, wireless, phonograph, and film; the first automobiles and airplanes, and the assembly lines to build them; the first synthetic plastic (Bakelite); the Panama Canal; the Haber-Bosch process; and the germ theory, along with its applications to public health.
1920–1970 saw radio, television, radar, and the first computers; the invention of nylon and other plastics; the expansion of mass manufacturing with an explosion of consumer products; penicillin and the golden age of antibiotics; Norman Borlaug’s Green Revolution in agriculture; nuclear power; the interstate highway system, jet airplanes, and the Moon landing.
1970–2020 saw … the PC, Internet, smartphone, and GPS; and genetic engineering, including GMOs. It also saw the Apollo project ended, the first supersonic passenger jet launched and then canceled, the promise of nuclear power unfulfilled, a War on Cancer with lackluster results, and similarly modest progress against heart disease. Yes, there were many incremental improvements in a variety of areas that are easy to forget about, but that’s the point—there were fewer revolutionary breakthroughs.
Another way of putting this is that if you look at a 1970 living room vs. a modern one, not much has visibly changed—again, other than the computer (and related technology, such as the big flat screen TV that has replaced your clunky CRT).
If progress is slowing down, what’s causing it?
One natural answer is the "low-hanging fruit" hypothesis: the easiest discoveries get made first, so each successive advance is harder to reach. On the other hand, when I brought up this idea on a panel discussion with Michael Nielsen, he rejected an oversimplified low-hanging fruit analysis, pointing out that when we discover new fields, such as computer science, they open up whole new orchards of low-hanging fruit. Or as he and Patrick Collison wrote in The Atlantic:
Suppose we think of science—the exploration of nature—as similar to the exploration of a new continent. In the early days, little is known. Explorers set out and discover major new features with ease. But gradually they fill in knowledge of the new continent. To make significant discoveries explorers must go to ever-more-remote areas, under ever-more-difficult conditions. Exploration gets harder. In this view, science is a limited frontier, requiring ever more effort to “fill in the map.” One day the map will be near complete, and science will largely be exhausted. In this view, any increase in the difficulty of discovery is intrinsic to the structure of scientific knowledge itself. …
But there’s a different point of view, a point of view in which science is an endless frontier, where there are always new phenomena to be discovered, and major new questions to be answered. …
… the optimistic view is that science is an endless frontier, and we will continue to discover and even create entirely new fields, with their own fundamental questions. If we see a slowing today, it is because science has remained too focused on established fields, where it’s becoming ever harder to make progress. We hope the future will see a more rapid proliferation of new fields, giving rise to major new questions. This is an opportunity for science to accelerate.
I think both of these questions get muddled if we don't tease apart the underlying S-curves.
Every technology, defined narrowly enough, goes through an S-curve: it starts out small, picks up steam, hits a hockey-stick inflection point, and grows exponentially; then it nears saturation, slows down, levels off, and plateaus. Electricity, for instance, went through an experimental/inventive phase for decades, grew rapidly starting in the 1880s, then leveled off in the early 20th century as power and lighting spread to the whole country. Today, with the power grid providing virtually universal coverage, electricity is not a high-growth industry.
Sustained growth over the long term, over centuries, comes from layering many S-curves on top of each other.
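The layering can be sketched numerically. This is an illustrative toy model, not anything from the essay itself: it uses logistic functions as stand-ins for technology S-curves, and the midpoint years and steepness are made up for illustration.

```python
import math

def logistic(t, midpoint, steepness=0.15):
    """One S-curve: slow start, rapid middle, plateau near 1."""
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

def layered_growth(t, midpoints=(1900, 1950, 2000)):
    """Total 'progress' as the sum of several staggered S-curves.

    Each individual curve plateaus, but the sum keeps climbing
    as long as new curves keep starting.
    """
    return sum(logistic(t, m) for m in midpoints)

for year in range(1880, 2041, 40):
    print(year, round(layered_growth(year), 2))
```

The point of the toy model is the shape, not the numbers: any single curve flattens out, so aggregate growth depends on new curves continuing to appear, which is exactly the question the layered-S-curve lens asks.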
As one plateaus, investment and talent shift to new, more promising areas. If Edison and Westinghouse were young men today, they wouldn’t be working on electricity; they’d be hacking up cryptocurrency apps, or building self-driving cars, or tinkering with 3D printing. George Stephenson wouldn’t be building a locomotive named Rocket, but an actual rocket. Pasteur would be doing genetic engineering.
With this lens, we can bring the debates above into sharper focus. Rather than ask whether computing “counts” or whether we’re running out of low-hanging fruit, we can investigate, separately, the size and shape of individual S-curves vs. the distribution of S-curves over time. We can ask questions like: How big is each new S-curve? How quickly does it rise and saturate? And are new curves starting as often as they used to?
I’m not sure how to begin investigating these questions, since progress is notoriously tricky to measure, and I’m not even sure these are exactly the right questions, or which of them are even well-defined. But I think this is the sort of analysis we need if we’re going to resolve the big issues, instead of having unproductive debates over living rooms and the height of fruit.