Pattern #67

Pattern Card


Prudent Progress Card


Prudent Progress

Prudent Progress Symbol

Pattern Heart

Given our limited capacity to predict and control the future in complex systems, wise action involves taking our ambitions and planning with a grain of salt. So honestly consider possibilities — imaginatively first and then with cautious real-world tests, next-step thinking, development of resilience, attention to weak but significant signals, and ongoing conscientious review.

  • What are appropriate forms of – or alternatives to – planning when we’re dealing with complex adaptive systems that cannot be controlled or predicted?
  • What effective forms of low-risk, high-learning prototyping can we come up with?
  • How can we respond appropriately when we don’t know what’s going to happen next?
  • What should we be mindful of as we take initiatives into complex systems with uncertain futures?
  • What systems could be in place to foresee and head off any damage our innovations might cause?
  • What needs to change so important innovations don’t create more problems than they solve?
  • What is the proper role of innovation – especially technological innovation – in a wise, sustainable, regenerative society?
  • How do we ensure that the downsides of particular proposals and technologies are fairly and thoroughly considered along with the upsides?

Prudent Progress – going deeper …

This is an edited version of the video on this page.

This pattern is inspired by the situation we have put ourselves in: our technological power combined with our cognitive limits – mixed in with the ways we run our society – makes a very toxic brew.

A poignant joke says: When you’re standing at the edge of a cliff, the next step is not progress. And the Persian poet Rumi said in one of his poems:
Sit, be still and listen.
Because you are drunk
and this is the edge of the roof.

A lot of people share this sense that we are at the edge of a major catastrophe largely of our own making – and that stepping back from that edge is hard. Yet the forces of progress, the advocates of always moving ahead, declare that technological progress is inevitable: we will always have new knowledge and new ways to do things. There is much that is vibrant and inspiring about that vision, but unfortunately it is manifesting in many toxic ways in our larger civilizational predicament and in our culture. More and more people are now talking about civilizational collapse and human extinction. These are very big issues emerging around our brilliantly expanding collective power.

This pattern is saying: Okay, let’s value progress, but let’s step back. Prudence is caution. “Know before you go.” Before we take the next big, potentially dangerous step, let’s stop and think and do some checking to see if we are on the right track. There is a classic articulation of that called “the Precautionary Principle”. It is a principle in science and policy that says: a technology should not be developed – or at least not released into the environment and used – until it has been proven non-toxic and not dangerous. This is a very high standard. It treats a technology as guilty until proven innocent, instead of innocent until proven guilty. It’s based on the recognition that slight changes in a complex system can produce massive disaster – what’s called “the butterfly effect”.

In 2000, Bill Joy, a tech guru who was one of the co-creators of Java, wrote an article entitled “Why the Future Doesn’t Need Us”. In it he talked about how, at some (probably unpredictable) point in the next few decades, developments in nanotechnology, biotechnology, robotics, and computing power will give us the capacity to create self-replicating entities – viruses, nanorobots, etc. – able to harm us or the environment to such an extent that human extinction will become inevitable. These self-replicating entities could be toxic or consume things we need on a massive scale. The most important feature of this dire prediction is that we will generate the capacity for individuals or small groups to create such entities, on purpose or by accident. It won’t be limited to big organizations and governments.

The breakthrough of CRISPR a few years ago was only one of a number of new developments that make humans even more powerful in ways straight out of Bill Joy’s prediction. CRISPR and its cousins make genetic engineering really easy. Somebody with a basic understanding of college-level biology and $10,000 of equipment can start fiddling with microorganisms and create something dangerous – by accident, because they are insane, because they have dire aims for humanity, or simply because they didn’t notice or seriously consider a “side effect” of their otherwise well-intentioned innovation. And since we’re talking about a self-replicating entity, once it’s let out into the environment, it will self-replicate.

Anybody being able to create self-replicating entities of any kind is an obvious formula for losing all control of humanity’s destiny.

This issue is, of course, very hot. But what do we do? Many would say: “But you can’t stop science! Science is really important! It produces medical breakthroughs! We can feed more people!” and so on. This is all true, but there’s a bigger reality involved here.

Part of what’s needed is understanding what’s in the image that accompanies this pattern. This quadrant model is called “the Cynefin framework”. It looks at four different kinds of situation or challenge and the kind of approach that is appropriate for each one. These four kinds of situations are: simple, complicated, complex and chaotic.

When a situation is “simple”, its dynamics are very linear. There are only one or two things happening. The causes and effects are direct and obvious. This is especially true to the extent that a situation is mechanical. You can realistically say, “This is the way to do it. If you want to fry an egg, here is how to do it.”

In a “complicated” situation, there are a lot of simple dynamics woven together. It’s like: “How do you get a rocket to the moon?” There is nothing particularly unknowable about that (or at least that’s what a lot of people thought!). If you study it well, you can find out what’s needed. There are a lot of different pieces to the puzzle, but if you understand them and do all those simple things in a coordinated way – since it’s a mechanical system – you can get it done. And so we got a rocket to the moon.

A “complex” situation is where the dynamics are not linear. They are feeding back into each other. Everything is interrelated, there is constant shifting going on, it’s not totally predictable or controllable. So how do you respond to such a situation? Well, first of all you need to understand that what will happen next may very well not be what you expect. There are patterns, but you need to be constantly searching for them, alert to them, scanning the environment for them – and being ready to change. It’s not as if once you learn a pattern, you’ve got control of things. The useful things you learn help you dance with what’s happening, enabling you to be more or less successfully flexible and able to observe and learn.

Then there are “chaotic” situations. Here there are no particular patterns at all! Everything is novel, everything is new all the time. Ideally, you’d have (or find) somebody who knows how to handle that kind of situation, and you follow that person. The dark side of that dynamic is that when things get chaotic, people look to a strong leader who will tell them what to do. Often that strong leader is more interested in manipulating people and gaining personal power and benefit than in furthering the welfare of the whole. Often they will create chaos – or at least the perception of chaos – in order to gain more power.

As our collective power and understanding expand, we’re slowly becoming aware of a very important reality: most of life is complex. Most of nature is complex. Societies, natural ecosystems, atmospheric dynamics – these are all constantly changing and shifting and internally adapting. We need to understand the principles of interacting in a relational way with that reality rather than trying to control it. That’s why the Cynefin framework is such a potent aid to understanding and supporting Prudent Progress. This is why we have it on the card as part of the image.
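
For those who find a schematic sketch clarifying: each of the framework’s four kinds of situation comes with a characteristic style of response. The “probe → sense → respond” style labels below come from Dave Snowden’s standard formulation of Cynefin, not from this card, and the code itself is just a minimal illustrative sketch, not part of the pattern.

```python
from enum import Enum

class Domain(Enum):
    SIMPLE = "simple"            # linear; cause and effect are obvious
    COMPLICATED = "complicated"  # knowable through expert analysis
    COMPLEX = "complex"          # patterns visible only in retrospect
    CHAOTIC = "chaotic"          # no discernible patterns at all

# Snowden's classic decision style for each domain:
APPROACH = {
    Domain.SIMPLE: "sense -> categorize -> respond (apply best practice)",
    Domain.COMPLICATED: "sense -> analyze -> respond (bring in expertise)",
    Domain.COMPLEX: "probe -> sense -> respond (safe-to-fail experiments)",
    Domain.CHAOTIC: "act -> sense -> respond (stabilize first, then learn)",
}

def recommend(domain: Domain) -> str:
    """Return the characteristic response style for a kind of situation."""
    return APPROACH[domain]

# Prudent Progress mostly concerns the complex domain:
print(recommend(Domain.COMPLEX))
```

The point this pattern makes lives mostly in that third row: in complex systems you probe with small, low-risk experiments and watch what happens before committing further.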

The people in the picture are there to communicate a sense of testing out how to grow this plant differently, or testing some genetic engineering or some new agricultural method, and it’s all being done inside a greenhouse, away from the broader environment. There’s a sense of a contained test going on. So when we understand we can’t necessarily predict and control what’s going to happen in a complex system, we rein in our ambitions. We can’t just get from A to B. There are limits to the relevance of A-to-B dynamics in complex systems. They don’t respond to linear interventions the way we expect. They are not mechanical. You can fix mechanical systems like your car – although the more complex and computerized cars get, the more we find emergent, unexpected phenomena showing up. Previously mechanical systems that depend more and more on computers start to mimic the complexity of living and natural systems. In that way they are evolving from complicated to complex and have to be engaged with differently.

So we are called to rein in our ambitions – because they are largely linear A-to-B ambitions – and to rein in our planning – which is also usually A-to-B (let’s do this first, and this second, and this third…). We’re called to move towards more responsive, innovative, in-the-moment ways of dealing with complex systems – out of a sense of deep understanding. We need to understand the nature of our responsiveness and the dynamics of the systems we are dancing with. We need to understand some basic principles – but they’re not A-to-B principles. They’re principles about how to dance creatively with a changing scene.

This pattern language is a really interesting example of what this pattern is talking about. Each one of these patterns is something to understand as a dynamic in a living system of a wise democracy. It doesn’t tell you specifically what to do. It says this KIND of thing is going on where you have a wise democracy. To the extent it’s not going on, you don’t have a wise democracy. So think about that while you are working on creating your wise democracy.

So this pattern says: “Honestly consider possibilities imaginatively first.” One of the amazing powers of intelligence and imagination is that you can do trial runs, tests, and experiments IN YOUR MIND. The consequences of doing experiments in your mind are usually far less severe than the consequences of doing them out in the world. There are people who are really good at this. Einstein is an archetypal example. He figured out relativity purely through thought experiments and mathematics. And since relativity has been subjected to test over and over again, it has proved to describe how reality behaves at the scales and in the domains it was designed for. It was an incredible intellectual achievement, built at first more from imagination than observation.

So this pattern is saying: if we’re going to make progress, we should start with imagination and intelligence. Using whatever understanding of complex systems we can bring, we should ask: “If we do this, what is likely to happen?” And if someone objects to what we propose, we say: “Hey, you over there who has doubts about this! Come on over into the conversation and bring us your doubts so we can think seriously about them.”

Part of what’s happening in our technology-addicted culture is that when anybody raises doubts about a new technology, there’s an effort to shut them up so we can “make progress”. “We can now inject this nanobot into a patient’s body that will kill their cancer cells! Yay!!” But once you have that technology, somebody could program it to attack heart cells. It’s like letting the genie out of the bottle. In Arabic folklore, the genie is a magical spirit that you can summon from a bottle or an oil lamp to grant your wishes – but you might have trouble getting him back in, and he might do mischief. This story has some deep wisdom embedded in it: technological knowledge, once released, is not easy to get back into the bottle.

[Note that I learned some things about this metaphor after I gave this talk: Originally genies or jinn were simply supernatural spirits in Arabic folk literature (reference). The idea of them living in a bottle or oil lamp apparently came from the Aladdin story that was added to the classic “The Arabian Nights” by a French translator (reference). There’s nothing in that story to indicate it was hard to get the genie back into the bottle. I can’t find where that last idea came from, but the meme is widely recognized (reference). Perhaps better famous metaphorical narratives about unrestrained technology are “The Sorcerer’s Apprentice” and “Frankenstein”. Or maybe I’m being old-fashioned with those examples. There are plenty of cautionary tales and technological dystopias in science fiction, such as here, here, here, and here… – Tom Atlee]

So the Prudent Progress pattern advocates imagination first – to look at all sides – and then cautious, contained tests that don’t risk prematurely letting an innovation out into the real-world environment. Then, after it passes those tests, what’s the next step? Not the next step that rushes it to market to make a profit; not the next step that generates its super-special benefits; but the next step needed to ensure it will not ultimately prove more damaging than beneficial.

This approach takes risks seriously and prioritizes them. That’s what prudence is. And it is the exact opposite of rapid forward motion, jumping to conclusions, and being addicted to the high you get from your visions of what you can do. It’s “next-step thinking”. And while you’re doing that, develop the resilience of the systems in which you are doing your experiments and into which you are going to introduce the technology.

Resilience means that if there’s a shock to a system, the system can respond and hold itself together. That is part of prudence – having redundancy, having stocks, having things in place so that when a shock comes, you can weather it.
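
Here is a toy illustration – my own sketch, not anything from the card – of why stocks and redundancy matter: the same one-time supply shock that leaves an unbuffered system unable to meet its needs barely dents one that holds a modest reserve. All the numbers and names are invented for the example.

```python
def run(reserve: float, inflow: float, demand: float,
        shock_step: int, shock_size: float, steps: int = 8) -> float:
    """Toy stock serving a steady demand, hit by a one-time supply shock.
    Returns the total unmet demand over the run."""
    stock, shortfall = reserve, 0.0
    for t in range(steps):
        supply = inflow - (shock_size if t == shock_step else 0.0)
        stock = max(stock + supply, 0.0)   # replenish; a stock can't go negative
        served = min(demand, stock)        # we can only serve what we have
        shortfall += demand - served       # unmet need accumulates
        stock -= served
    return shortfall

# The identical shock; the only difference is the reserve held in advance:
print(run(reserve=0,  inflow=5, demand=5, shock_step=3, shock_size=10))  # 5.0 unmet
print(run(reserve=20, inflow=5, demand=5, shock_step=3, shock_size=10))  # 0.0 unmet
```

The unbuffered system fails to meet that step’s demand entirely; the buffered one sails through. That is all resilience means here: slack deliberately held in place so a shock does not become a collapse.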

And pay attention to weak but significant signals. Overwhelmingly, our society ignores weak signals. It waits until the situation develops almost into a catastrophe. You start out with little signals, little disturbances which, if ignored, become problems which, if ignored, become issues people are arguing over. At that point forces come in to push one side or another to move ahead. But by then it’s late, and we’ve gotten into overreacting, over-responding, over-pushing… We need to STOP – like what you’re supposed to do when you come to railroad tracks: stop, look and listen. We need to reflect, think, and welcome and attend to whatever diversity and disturbance is going on. A whole other pattern is relevant here: Using Diversity and Disturbance Creatively. And we need to recognize that we’re not stopping and reflecting because we’re just scared people. We are doing it because we are being collectively wise.

The last part of this pattern talks about ongoing conscientious review. We seriously look at what’s happening at each stage. We understand that if an innovation or technology has the potential for long-term consequences that don’t show up immediately, we’re going to wait for the long-term tests. We are not going to race out and say: “Because our tests showed it created this fabulous effect in six months with no negative consequences, this is safe to do!” We are NOT going to do that.

The recognition that there are no guarantees is part of the wisdom frame of reference: you do ongoing review, you take a step and watch that step. And then take another.

We need to avoid the perfect storm of total power and no wisdom. So this pattern is suggesting that on many different levels, in many different domains, we apply these understandings. The fact is that life for many people was very good with the technologies we had in the 1950s. There were people who were really happy hundreds of years ago. Yes, there are benefits to each new step of technology, but we’ve gotten to a point where we have to slow down – to stop, look and listen. Because the collision that could happen could be terminal.

Video Introduction (19 min)

Examples and Resources

A key example of an approach for keeping innovation appropriate is the precautionary principle. It says that a technology should not be applied in any broad or potentially risky way until it is proven benign. The precautionary principle is an extremely conservative one, very different from the progressive principle that says we are – and always should be – developing and advancing, everything up and up and up all the time, which is our civilization’s bias at the moment. So the precautionary principle is understandably resisted by ambitious technologists. And it’s actually very hard to apply in a broadly collective way: if the U.S. adopted the precautionary principle, what about China? What about a terrorist network? How do you get the precautionary principle applied everywhere?

That question should not be seen as a rhetorical question. It should be seen as a real question that demands some creative answers: “Okay, how do we do this clearly necessary thing?”

Full Cost Accounting is another one of the patterns that is very relevant here. Let’s not just look at the upsides of our developing technologies. We have this brilliant ability to make tiny robots that can go around inside us and kill cancer cells. Okay – if you can do that, you can also create tiny robots that go around inside us and kill brain cells or heart cells. The wrong person with this technology would have very troubling capacities in their hands. So before we decide to go there, we should look at the full cost accounting of any new technology. If we thought in terms of full cost accounting, I suspect we would more often apply the precautionary principle.

An existing protocol very much along these lines – albeit a mild one, given the way it is usually applied – is the environmental impact statement (EIS). Essentially an EIS asks: “If you are going to do this new development project or create this new technology, what is its environmental impact going to be?” Unfortunately, it’s a corrupted system in practice. But the idea behind it is much in line with this particular pattern.