Part of what’s needed is understanding what’s in the image that accompanies this pattern. This quadrant model is called “the Cynefin framework”. It looks at four different kinds of situation or challenge and the kind of approach that is appropriate for each one. These four kinds of situations are: simple, complicated, complex and chaotic.
When a situation is “simple”, that means that its dynamics are very linear. There are one or two things happening here. The causes and effects are direct and obvious. This kind of thing is especially true to the extent a situation is mechanical. You can realistically say, “This is the way to do it. If you want to fry an egg, here is how to do it.”
In a “complicated” situation, there are a lot of simple dynamics woven together. It’s like: “How do you get a rocket to the moon?” There is nothing particularly unknowable about that (or at least that’s what a lot of people thought!). If you study it well, you can find out what’s needed. There are a lot of different pieces to the puzzle, but if you understand them and do all those simple things in a coordinated way – since it’s a mechanical system – you can get it done. And so we got a rocket to the moon.
A “complex” situation is where the dynamics are not linear. They are feeding back into each other. Everything is interrelated, there is constant shifting going on, it’s not totally predictable or controllable. So how do you respond to such a situation? Well, first of all you need to understand that what will happen next may very well not be what you expect. There are patterns, but you need to be constantly searching for them, alert to them, scanning the environment for them – and being ready to change. It’s not as if once you learn a pattern, you’ve got control of things. The useful things you learn help you dance with what’s happening, enabling you to be more or less successfully flexible and able to observe and learn.
Then there are “chaotic” situations. Here there are no particular patterns at all! Everything is novel, everything is new all the time. Ideally, you’d have (or find) somebody who knows how to handle that kind of situation and you follow that person. The dark side of that dynamic is that when things get chaotic, people look to a strong leader who will tell them what to do. Often that strong leader is more interested in manipulating people and gaining personal power and benefit than in furthering the welfare of the whole. Often they will create chaos – or at least the perception of chaos – in order to gain more power.
As our collective power and understanding expand, we’re slowly becoming aware of a very important reality: Most of life is complex. Most of nature is complex. Societies, natural ecosystems, atmospheric dynamics – these are all constantly changing, shifting, and internally adapting. We need to understand the principles of interacting in a relational way with that reality rather than trying to control it. That’s why this Cynefin framework is such a potent form of understanding and supporting Prudent Progress. This is why we have it on the card as part of the image.
The people in the picture are there to communicate a sense of testing out how to grow this plant differently, or testing some genetic engineering or some new agricultural method, and it’s all being done inside a greenhouse, away from the broader environment. There’s a sense of a contained test going on. So when we understand we can’t necessarily predict and control what’s going to happen in a complex system, we rein in our ambitions. It’s like we can’t just get from A to B. There are limits to the relevance of A-to-B dynamics in complex systems. They don’t respond to linear interventions the ways we expect. They are not mechanical. You can fix mechanical systems like your car – although the more complex and computerized cars get, the more we find emergent, unexpected phenomena showing up. Previously mechanical systems that increasingly use computers become increasingly complex and start to mimic the complexity of living and natural systems. In that way they are evolving from complicated to complex and have to be engaged with differently.
So we are called to rein in our ambitions – because they are largely linear A-to-B ambitions – and to rein in our planning – which is also usually A-to-B (like let’s do this first, and this second, and this third…). We’re called to move towards more responsive, innovative, in-the-moment ways of dealing with complex systems – out of a sense of deep understanding. We need to understand what the nature of our responsiveness is – and what the dynamics of these systems are – that we are dancing with. We need to understand some basic principles – but they’re not A-to-B principles. They’re principles about how to dance creatively with a changing scene.
This pattern language is a really interesting example of what this pattern is talking about. Each one of these patterns is something to understand as a dynamic in a living system of a wise democracy. It doesn’t tell you specifically what to do. It says this KIND of thing is going on where you have a wise democracy. To the extent it’s not going on, you don’t have a wise democracy. So think about that while you are working on creating your wise democracy.
So this pattern says: “Honestly consider possibilities imaginatively first.” That is one of the amazing powers of intelligence and imagination: you can do trial runs, you can do tests and experiments IN YOUR MIND. The consequences of doing experiments in your mind are usually considerably less than the consequences of doing them out in the world. There are people who are really good at it. Einstein is an archetypal example of this. Einstein figured out relativity purely through thought experiments and mathematics. Since relativity has been subjected to test over and over and over and over again, it has proved to describe how reality behaves at the scale and in the domains it was designed for. It was an incredible intellectual achievement, designed at first more from imagination than observation.
So this pattern is saying: If we’re going to make progress, we should start in the imagination and with intelligence. And, using whatever we can bring to our understanding of complex systems, we should sense “if we do this, what is likely to happen?”. And if someone objects to what we propose, we say “Hey you over there who has doubts about this! Come on over into the conversation and bring us your doubts so we can think seriously about them.”
Part of what’s happening in our technology-addicted culture is that when anybody raises doubts about a new technology, there’s an effort to shut them up so we can “make progress”. “We can now inject this nanobot into a patient’s body that will kill their cancer cells! Yay!!” But once you have that technology, somebody could program it to attack heart cells. It is like “you have let the genie out of the bottle”. In Arabic folklore, the genie is a magical spirit that you can summon from a bottle or an oil lamp to grant your wishes – but you might have trouble getting him back in, and he might do mischief. This story has some deep wisdom embedded in it. Technological knowledge has that capacity to not be easy to get back into the bottle.
[Note that I learned some things about this metaphor after I gave this talk: Originally genies or jinn were simply supernatural spirits in Arabic folk literature (reference). The idea of them living in a bottle or oil lamp apparently came from the Aladdin story that was added to the “Arabian Nights” classic by a French translator (reference). There’s nothing in that story to indicate it was hard to get the genie back into the bottle. I can’t find where that last idea came from, but the meme is widely recognized (reference). Perhaps better-known metaphorical narratives about unrestrained technology are “The Sorcerer’s Apprentice” and “Frankenstein”. Or maybe I’m being old-fashioned with those examples. There are plenty of cautionary tales and technological dystopias in science fiction, such as here, and here… – Tom Atlee]
So the Prudent Progress pattern advocates imagination first – to look at all sides – and then cautious real-world tests that don’t risk prematurely letting an innovation out into the real-world environment. Then, after it passes those tests, what’s the next step? Not the next step to rush it to market to make a profit; not the next step to generate its super-special benefits, but the next step needed so that it will not ultimately prove more damaging than beneficial.
This approach seriously prioritizes risks. That’s what prudence is. And it is the exact opposite of rapid forward motion, jumping to conclusions, being addicted to the high you get from your visions of what you can do. It’s “next step thinking”. And while you’re doing that, develop the resilience of the systems in which you are doing your experiments, and into which you are going to introduce the technology.
Resilience means that if there’s a shock to a system, the system can respond and hold itself together. That is part of prudence – having redundancy, having stocks, having things in place so that if there’s a shock, you can weather the shock. That’s what’s going on with resilience.
And pay attention to weak but significant signals: Overwhelmingly, our society ignores weak signals. It waits until the situation develops almost into a catastrophe. You start out with little signals, with little disturbances which, if ignored, become a problem which, if ignored, becomes an issue people are arguing over. At which point forces come in to push one side or another to move ahead. But it’s late and we’ve gotten into overreacting, over-responding, over-pushing… We need to STOP – it’s like what you’re supposed to do when you come to railroad tracks – we need to stop, look and listen. We need to reflect, think, and welcome and attend to any diversity and disturbance that’s going on. That’s a whole other pattern that’s relevant here: Using Diversity and Disturbance Creatively. And we need to recognize that we’re not stopping and reflecting because we’re just scared people. We are doing that because we are being collectively wise.
The last part of this pattern talks about ongoing conscientious review. We seriously look at what’s happening at each stage. We understand that if this innovation or technology has the potential for long-term consequences that don’t show up immediately, we’re going to wait for the long-term tests. We are not going to race out and say: “Because our tests showed it created this fabulous effect in six months with no negative consequences, we are going to say this is safe to do!” We are NOT going to do that.
The recognition that there are no guarantees is part of the wisdom frame of reference: You do ongoing review, and you just take a step and watch that step. And then do another.
We need to avoid the perfect storm of total power and no wisdom. So this pattern is suggesting that on many different levels, in many different domains, we apply these understandings. The fact is that life for many people was very good with the technologies that we had in the 1950s. There were people who were really happy hundreds of years ago. Yes, there are benefits to each new step of technology, but we’ve gotten to a point where we have to slow down – to stop, look and listen. Because the collision that could happen could be terminal.