Technology drives change. Does it also drive progress?

Those eight words sum up a lot of the conversation going on in society at the moment. Some serious head-scratching about the whole relationship between “technology” and “progress” seems like a good idea.

In Part 1, I summarized “four naïveties” that commonly slip into techno-optimistic views of the future. Such views gloss over: (1) how technology is erasing the low-skilled jobs that, in the past, have helped poor countries to develop (e.g. China); (2) how, in a global war for talent, poorer communities struggle to hold onto the tech skills they need; (3) how not just technology, but politics, decides whether technological change makes people better off; and (4) how every technology is not just a solution, but also a new set of problems that society must manage well in order to realize net gains.

Technology = Progress?

The deepest naïveté—the belief lurking in the background of all the above—is that technological change is a good thing.

This is one of the Biggest Ideas of our time—and also one of the least questioned…

It wasn’t always so obviously true. In 1945, J. Robert Oppenheimer, upon witnessing the first nuclear explosion at the Manhattan Project’s New Mexico test site, marked the moment with a dystopian quote from the Bhagavad Gita: “Now I am become Death, the destroyer of worlds.”

But within ten years, and despite the horrors of Hiroshima and Nagasaki, a far more utopian spin on the Atomic Age had emerged. Lewis Strauss, chairman of the Atomic Energy Commission, one of its founding commissioners and a champion of the U.S. “Atoms for Peace” program, proclaimed in 1954:

It is not too much to expect that our children will enjoy in their homes electrical energy too cheap to meter, and will know of great periodic famines in the world only as matters of history. They will travel effortlessly over the seas and under them, and through the air with a minimum of danger and at great speeds. They will experience a life span far longer than ours as disease yields its secrets and man comes to understand what causes him to age.

What happened in the years between those two statements to flip the script from techno-dystopia to techno-utopia?

Wartime state-sponsored innovation yielded not only the atomic bomb, but: better pesticides and antibiotics; advances in aviation and the invention of radar; plastics and synthetic fibers; fertilizers and new plant varieties; and of course, nuclear energy.

Out of these achievements, a powerful idea took hold, in countries around the world: science and technology meant progress.

In the U.S., that idea became official government dogma almost immediately after the war. In a famous report, Science: The Endless Frontier, Vannevar Bush (chief presidential science advisor during WWII, leader of the country’s wartime R&D effort and co-founder of the U.S. arms manufacturer Raytheon) made the case to the White House that (a) the same public funding of science that had helped win the war would, if sustained during peacetime, lift society to dizzying new heights of health, prosperity and employment, and that (b) “without scientific progress, no amount of achievement in other directions can insure our health, prosperity and security as a nation in the modern world.” But Vannevar went further, framing the public funding of scientific and technological research as a moral imperative:

It has been basic United States policy that Government should foster the opening of new frontiers. It opened the seas to clipper ships and furnished land for pioneers. Although these frontiers have more or less disappeared, the frontier of science remains. It is in keeping with the American tradition—one which has made the United States great—that new frontiers shall be made accessible for development by all American citizens.
Moreover, since health, well-being and security are proper concerns of Government, scientific progress is, and must be, of vital interest to Government. Without scientific progress the national health would deteriorate; without scientific progress we could not hope for improvement in our standard of living or for an increased number of jobs for our citizens; and without scientific progress we could not have maintained our liberties against tyranny.

In short, science and technology = progress (and if you don’t think that, there’s something unpatriotic—and morally wrong—about your thinking).

The High Priests of Science & Technology Have Made Believers Of Us All

In every decade since, many of the most celebrated, most influential voices in popular culture have been those who repeated and renewed this basic article of faith—in the language of the latest scientific discovery or technological marvel. E.g.,

1960s: John F. Kennedy’s moonshot for space exploration; Gordon Moore’s Law of exponential growth in computing power; the 1964-65 New York World’s Fair (which featured future-oriented exhibits like Bell Telephone’s PicturePhone and General Motors’ Futurama)

1970s: Alvin Toffler’s Future Shock, which argued that technology was now the primary driver of history; Carl Sagan, who argued that scientific discovery (specifically, in astronomy) reveals to us the most important truths of the human condition; Buckminster Fuller, who argued that breakthroughs in chemistry, engineering and manufacturing would ensure humanity’s survival on “Spaceship Earth”

We can make all of humanity successful through science’s world-engulfing industrial evolution. – Buckminster Fuller, Operating Manual for Spaceship Earth (1968)

1980s: Steve Jobs, who popularized the personal computer (the Mac) as a tool for self-empowerment, self-expression and self-liberation (hence, Apple’s iconic “1984” TV advertisement); Eric Drexler, the MIT engineer whose 1986 book Engines of Creation: The Coming Era of Nanotechnology imagined a future free from want, because we’ll be able to assemble anything and everything we need, atom by atom; Hans Moravec, an early AI researcher whose 1988 book, Mind Children, applied Moore’s Law to the emerging fields of robotics and neuroscience and predicted that by 2040 humanity would possess godlike powers of Creation-with-a-capital-C, with our robots taking our place as Earth’s most intelligent species.

1990s: Bill Gates, whose vision of “a computer on every desk and in every home” equated improved access to Microsoft software with improvements in human well-being; Ray Kurzweil, another AI pioneer, who argued in The Age of Intelligent Machines (1990), The Age of Spiritual Machines (1999) and The Singularity Is Near (2005) that the essence of what makes us human is to reach beyond our limits. It is therefore inevitable that science and technology will eventually accomplish the next step in human evolution: the transhuman. By merging the “wetware” of human consciousness with computer hardware and software, we will transcend the biological limits of brainpower and lifespan.

2000s: Sergey Brin and Larry Page, who convinced us that by organizing the world’s information, Google could help humanity break through the barrier of ignorance that stands between us and the benefits that knowledge can bring; Steve Jobs (again), who popularized the smartphone as a tool of self-empowerment, self-expression and self-liberation (again), by making it possible for everyone to digitize everything we see, say, hear and touch when we’re not at our desks.

2010s: Mark Zuckerberg, who, in his Facebook manifesto, positioned his company’s social networking technology as necessary for human progress to continue:

Our greatest opportunities are now global—like spreading prosperity and freedom, promoting peace and understanding, lifting people out of poverty, and accelerating science. Our greatest challenges also need global responses—like ending terrorism, fighting climate change, and preventing pandemics. Progress now requires humanity coming together not just as cities or nations, but also as a global community… Facebook develops the social infrastructure to give people the power to build a global community that works for all of us.

(Facebook, apparently, is the technology that will redeem us all from our moral failure to widen our ‘circle of compassion’ [as Albert Einstein called it] toward one another.)

Elon Musk likewise frames his SpaceX ‘Mars-shot’ as necessary. How else will humanity ever escape the limits of Spaceship Earth? (Seventy-five years after Vannevar’s Endless Frontier report, we now take for granted that “escaping” such “limits” is the proper goal of science—and by extension, of society.)

And last (for now, at least), Yuval Harari, whose book, Homo Deus: A Brief History of Tomorrow, says it all in the title.

Science and technology is the engine of human progress. That idea has become so obviously true to modern minds that we no longer recognize it for what it really is: modernity’s single most debatable premise.

Rather than debate this premise—a debate which itself offers dizzying possibilities of progress, in multiple dimensions, by multiple actors—we quite often take it as gospel.

Rather than debate this premise, Yuval instead takes it to its ultimate conclusion, and speaks loudly the question that the whole line of High Priests before him quietly whispered: Do our powers of science and technology make us gods?

It is the same question that Oppenheimer voiced in 1945, only now it’s been purified of all fear and doubt.

We Can Make Heaven On Earth

“Utopia,” the word Thomas More coined in his book of the same name in 1516, literally means “no place.” In the centuries since, many prophets of this or that persuasion have painted utopian visions. But what makes present-day visions of techno-utopia different is the path they chart for getting there.

In the past, the path to Utopia called for an impossible leap in human moral behavior. Suddenly, we’ll all follow the Golden Rule, and do unto others as we would have done unto us. Yeah, right.

But today’s path to techno-Utopia calls for a leap in science and technology—in cybernetics, in artificial intelligence, in biotechnology, in genetic manipulation, in molecular manufacturing. And that does seem possible…doesn’t it? Put it this way: Given how far our technology has come since the cracking of the atom, who among us is willing to say that these breakthroughs are impossible?

And if they are not impossible, then Utopia is attainable. Don’t we then have a duty—a moral duty—to strive for it?

This argument is so persuasive today because we have been persuading ourselves of it for so long. Persuasive—and pervasive. It is the basic moral case being made by a swelling number of tech-driven save-the-world projects, the starkest example of which is Singularity University.

I find it so compelling that I don’t quite know what to write in rebuttal…

Gods—Or Slaves?

Until I recall some of the wisdom of Hannah Arendt, or Zygmunt Bauman, or remember my earlier conversation with Ian, and remind myself that technology never yields progress by itself. Technology cannot fix our moral and social failings, because those same failings are embedded within our technologies. They spread with our technologies. Our newest technology, A.I. (which learns our past behaviors in order to repeat them), is also the plainest proof of this basic truth. More technology will never be the silver-bullet solution to the problems that technology has helped create.

And so we urgently need to delve into this deepest naïveté of our modern mindset, this belief that technological change is a good thing.

How might we corrupt our techno-innocence?

One thing that should leap out from my brief history of the techno-optimistic narrative is that most of the narrators have been men. I don’t have a good enough grasp of gender issues to do more than point out this fact, but that right there should prompt some deep conversations. Question: Which values are embedded in, and which values are excluded from, tech-driven visions of human progress? (E.g., Is artificial enhancement an expression of humanity’s natural striving-against-limits, or a negation of human nature?)

As a political scientist, I can’t help but ask the question: Whose interests are served and whose are dismissed when technology is given pride of place as the primary engine of our common future? Obviously, tech entrepreneurs and investors do well: Blessed are the tech innovators, for they are the agents of human progress. At the same time: Accursed are the regulators, for they know not what they govern.

Yuval slips into this kind of thinking in his Homo Deus, when he writes:

Precisely because technology is now moving so fast, and parliaments and dictators alike are overwhelmed by data they cannot process quickly enough, present-day politicians are thinking on a far smaller scale than their predecessors a century ago. Consequently, in the early twenty-first century politics is bereft of grand visions. Government has become mere administration. It manages the country, but it no longer leads it.

But is it really the speed of technological change, or the scale of data, that limits the vision of present-day politicians? Or is it the popular faith that any political vision must accommodate the priorities of technological innovators? For all its emerging threats to our democracy, social media must be enabled. For all its potential dangers, research into artificial intelligence must charge ahead. Wait, but—why?

Why!?! What an ignorant question!

And while we’re on the topic of whose interests are being served/smothered, we should ask: whose science and technology is being advanced, and whose is being dismissed? “Science and technology” is not an autonomous force. It does not have its own momentum, or direction. We determine those things.

The original social contract between science and society proposed by Vannevar Bush in 1945 saw universities and labs doing pure research for its own sake, guided by human curiosity and creativity. The private sector, guided by the profit motive, would then sift through that rich endeavor to find good ideas ready to be turned into useful tools for the rest of us. But the reality today is an ever closer cooperation between academia and business. Private profit is crowding out public curiosity. Research that promises big payoffs within today’s economic system usually takes precedence over research that might usher in tomorrow’s…

Homo Humilitas

All predictions about the future reflect the values and norms of the present.

So when Yuval drops a rhetorical question like, Will our powers of science and technology one day make us gods?, it’s time to ask ourselves tough questions about the value we place on technology today, and what other values we are willing to sacrifice on its altar.

The irony is that, just by asking ourselves his question—by elevating science and technology above other engines of progress, above other values—we diminish what humanity is and narrow humanity’s future to a subset of what might be.

It is as if we’ve traded in the really big questions that define and drive progress—“What is human life?” and “What should human life be?”—for the bystander-ish “What does technology have in store for our future?”

That’s why I suspect that the more we debate the relationship between technology and progress, the more actual progress we will end up making.

I think we will remind ourselves of the other big engines of progress at society’s disposal, like “law” and “culture” and “religion,” which are no less, and no more, value-laden than “technology.”

I think we will remind ourselves of other values, some of which might easily take steps backward as technology “progresses”. E.g., As our powers to enhance the human body with technology grow stronger, will our fragile, but fundamental, belief in the intrinsic dignity of every human person weaken?

I think we will become less timid and more confident about our capacity to navigate the now. Within the techno-utopian narrative, we may feel silenced by our own ignorance. Outside of that narrative, we may feel emboldened by our wisdom, our experience, our settled notions of right and wrong.

I think we will recall, and draw strength from, examples of when society shaped technology, and not the other way around. In the last century, no technology enjoyed more hype than atomic energy. And yet just look at the diversity of ways in which different cultures incorporated it. In the US, where the nuclear conversation revolves around liability, no new nuclear plants were ordered for more than three decades after the Three Mile Island accident of 1979. In Germany, where the conversation revolves around citizens’ rights to participate in public risk-taking, the decision was taken in 2011 to phase out all 17 of the country’s reactors by 2022—in direct response to the Fukushima meltdown in Japan. Meanwhile in South Korea, whose capital Seoul is only 700 miles from Fukushima, popular support for the country’s 23 reactors remained strong. (For South Koreans, nuclear technology has been a symbol of the nation’s independence.)

And I think we will develop more confidence to push back against monolithic techno-visions of “the good.” Wasn’t the whole idea of modernity supposed to be, as Nietzsche put it, “God is dead”—and therefore we are free to pursue a radical variety of “goods”? A variety that respects and reflects cultural differences, gender differences, ideological differences… Having done the hard work to kill one idea of perfection, why would we now all fall in line behind another?

Four Little Questions To Reclaim The Future

None of the above is to deny that technology is a profound part of our lives. It has been, since the first stone chisel. But we hold the stone in our hands. It does not hold us.

Or does it? After decades of techno-evangelism, we risk slipping into the belief that if we can do it, we should do it.

Recent headlines (of cybercrime, social media manipulation, hacked public infrastructure and driverless car accidents) are shaking that naïveté. We understand, more and more, that we need to re-separate, and re-arrange, these two questions, in order to create some space for ethics and politics to return. What should we do? Here, morality and society must be heard. What can we do? Here, science and technology should answer.

Preferably in that order.

It’s hard to imagine that we’ll get there. But I think: the more we debate the relationship between technology and progress, the more easily we will find our rightful voice to demand of any techno-shaman who intends to alter society:

  1. What is your purpose?
  2. Who will be hurt?
  3. Who will benefit?
  4. How will we know?

By asking these four simple questions, consistently and persistently, we can re-inject humility into our technological strivings. We can widen participation in setting technology’s direction. And we can recreate genuinely shared visions of the future.