Prudent Progress – going deeper …
This is an edited version of the video on this page.
This pattern is inspired by the situation we have put ourselves in: our technological power and our cognitive limits, mixed in with the ways we handle our society, make a very toxic brew.
A poignant joke says: When you’re standing at the edge of a cliff, the next step is not progress. And the Persian poet Rumi said in one of his poems:
Sit, be still and listen.
Because you are drunk
and this is the edge of the roof.
A lot of people share this sense that we are at the edge of a major catastrophe largely of our own making – and that stepping back from it is hard. Yet the forces of progress, the advocates of always moving ahead, declare that technological progress is inevitable. We will always have new knowledge and new ways to do things. There is much that is vibrant and inspiring about that vision but, unfortunately, it is manifesting in many toxic ways in our larger civilizational predicament and in our culture. More and more people are now talking about civilizational collapse and human extinction. These are very big issues emerging around our brilliantly expanding collective power.
This pattern is saying: Okay, let’s value progress, but let’s also step back. Prudence is caution. “Know before you go.” Before we take the next big, potentially dangerous step, let’s stop, think, and do some checking to see if we are on the right track. There is a classic articulation of that called “the precautionary principle”. It is a philosophical and scientific principle that says: a particular technology should not be developed – or at least not released into the environment and used – until it has been shown to be non-toxic and not dangerous. This is a very high standard: the technology is treated as guilty until proven innocent, instead of innocent until proven guilty. It is based on the recognition that slight changes in a complex system can trigger massive disaster – what’s called “the butterfly effect”.
In 2000, Bill Joy, a tech guru who was one of the co-creators of Java, wrote an article entitled “Why the Future Doesn’t Need Us”. In it he talked about how, in the next few decades – probably unpredictably – developments in nanotechnology, biotechnology, robotics, and computing power will give us the capacity to create self-replicating entities – viruses, nanorobots, etc. – able to harm us or the environment to such an extent that human extinction will become inevitable. These self-replicating entities could be toxic or consume things we need on a massive scale. The most important feature of this dire prediction is that individuals or small groups will gain the capacity to create such entities, on purpose or by accident. It won’t be limited to big organizations and governments.
The breakthrough of CRISPR a few years ago was only one of a number of advances that make humans even more powerful in ways straight out of Bill Joy’s prediction. CRISPR and its cousins make genetic engineering really easy. Somebody with a basic understanding of college-level biology and $10,000 of equipment can start fiddling with microorganisms and create something dangerous – by accident, because they are insane, because they have dire aims for humanity, or simply because they didn’t notice or seriously consider a “side effect” of their otherwise well-intentioned innovation. And since we are talking about self-replicating entities, once one gets out into the environment, it will keep replicating on its own.
Anybody being able to create self-replicating entities of any kind is an obvious formula for losing all control of humanity’s destiny.
This issue is, of course, very hot. But what do we do? Many would say: “But you can’t stop science! Science is really important! It produces medical breakthroughs! We can feed more people!” and so on. This is all true, but there’s a bigger reality involved here.