The futurist Ian Pearson, in his fascinating blog The More Accurate Guide to the Future, recently directed my attention to a new report by Bloomberg Business. Published just two days ago, this wonderful short report identifies ten of the worst-case scenarios for 2016. To write it, Bloomberg’s staff asked –
“…dozens of former and current diplomats, geopolitical strategists, security consultants, and economists to identify the possible worst-case scenarios, based on current global conflicts, that concern them most heading into 2016.”
I really love this approach, since currently many futurists – particularly the technology-oriented ones – are focusing mainly on all the good that will come to us soon enough. Ray Kurzweil and Tony Seba (in his book Clean Disruption) are forecasting a future with abundant energy; Peter Diamandis believes we are about to experience a new consumerism wave by “the rising billion” from the developing world; Aubrey de Grey forecasts that we’ll uncover means to stop aging in the foreseeable future. And I tend to agree with them all, at least generally: humanity is rapidly becoming more technologically advanced and more efficient. If these upward trends continue, we will experience an abundance of resources and a quality of life that far surpasses that of our ancestors.
But what if it all goes wrong?
When analyzing the trends of the present, we often tend to ignore the potential catastrophes, the disasters, and the irregularities and ‘breaking points’ that could occur. Or rather, we acknowledge that such irregularities could happen, but we often attempt to focus on the good instead of the bad. If there’s one thing that human beings love, after all, it’s feeling in control – and unexpected events show us the truth about reality: that much of it is out of our hands.
Bloomberg is taking the opposite approach with the current report (more of a short article, really): they have collected ten of the worst-case scenarios that could still conceivably happen, and have tried to understand how they could come about, and what their consequences would be.
The scenarios range widely in the areas they cover, from Putin sidelining America, to Israel attacking Iran’s nuclear facilities, and down to Trump winning the presidential elections in the United States. There’s even mention of climate change heating up, and the impact harsh winters and deadly summers would have on the world.
Strangely enough, the list includes only one scenario dealing with technologies: namely, banks being hit by a massive cyber-attack. In that aspect, I think Bloomberg are shining a light on a very large hole in geopolitical and social forecasting: the fact that technology-oriented futurists are almost never included in such discussions. Their ideas are usually far too bizarre and alienating for the silver-haired generals, retired diplomats and senior consultants who are involved in those discussions. And yet, technologies are a major driving force changing the world. How could we keep them aside?
Technological Worst-Case Scenarios
Here are a few of my own worst-case scenarios for 2016, revolving around technological breakthroughs. I’ve tried to stick to the present as much as possible, so there are no scientific breakthroughs in this list (it’s impossible to forecast those), and no “cure to aging” or “abundant energy” in 2016. That said, quite a lot of horrible stuff could happen with technologies. Such as –
Proliferation of 3D-printed firearms: a single proficient designer could come up with a new design for 3D-printed firearms that reaches an efficiency level comparable to that of mass-manufactured weapons. The design would spread like wildfire through peer-to-peer services, and would force a complete overhaul of the firearm registration protocols in many countries.
First pathogen created with CRISPR technology: biology enthusiasts are now using CRISPR – a genetic engineering method so efficient and powerful that ten years ago it would’ve been considered the stuff of science fiction. It is incredibly easy – at least compared to the past – to genetically manipulate bacteria and viruses with this technology. My worst-case scenario here is that one bright teenager with the right tools at hand will create a new pathogen, release it into the environment and, worse, brag about it online. Even if that pathogen proves relatively harmless, the mass scare that follows would halt research in genetic engineering laboratories around the world, and create panic about do-it-yourself enthusiasts.
A major, globe-spanning A.I. disaster: whether due to hacking or to a simple programming mistake, an important A.I. will malfunction. Maybe it will be one – or several – of the algorithms currently trading in stock markets, largely autonomously, since they conduct a new deal every 740 nanoseconds; no human being can follow their deals in real time. A previous disaster on that front occurred in 2012, when an algorithm operated by Knight Capital purchased stocks at inflated prices totaling $7 billion – in just 45 minutes. The stock market survived (even if Knight Capital’s stock did not), but what would happen if a few algorithms went haywire at the same time, or in response to one another? That could easily happen in 2016.
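The feedback loop between autonomous trading algorithms can be illustrated with a deliberately naive toy simulation (entirely hypothetical – real trading systems are vastly more complex): two momentum-chasing bots share one price, each buys whenever the price has not fallen, and each purchase inflates the price, so the two bots feed each other’s buying frenzy.

```python
# Toy feedback loop: two momentum-chasing bots trading a shared price.
# A deliberately naive, hypothetical sketch - not a model of real markets.

def run_market(steps=20, price=100.0, bots=2):
    history = [price]
    for _ in range(steps):
        # Momentum = last price change; zero on the very first tick.
        momentum = history[-1] - (history[-2] if len(history) > 1 else history[-1])
        for _bot in range(bots):
            if momentum >= 0:      # price didn't fall -> keep buying
                price *= 1.01      # each purchase inflates the price by 1%
        history.append(price)
    return history

prices = run_market()
# Two bots reacting to each other push the price up roughly 49% in
# 20 ticks, with no change in the underlying value of anything.
print(f"start: {prices[0]:.2f}, end: {prices[-1]:.2f}")
```

Spread the same runaway dynamic across thousands of real algorithms reacting on microsecond timescales, and the scenario above stops sounding far-fetched.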
First implant virus: implants like cardiac pacemakers, or external devices like insulin pumps, can be hacked relatively easily. They do not pack much in the way of security, since they need to be as small and energy-efficient as possible, and in many cases they rely on a wireless connection to the external environment. In my worst-case scenario for 2016, a terrorist manages to hack a pacemaker and create a virus that spreads from one pacemaker to another over the wireless communication between the devices. Finally, on a certain date – maybe September 11? – the virus disables all the pacemakers at the same time, or makes them send a burst of electricity through the patient’s heart, essentially sending the patients into cardiac arrest.
This blog post is not meant to create panic or mass hysteria, but to highlight some of the worst-case scenarios in the technological arena. There are many other possible worst-case scenarios, and Ian Pearson details a few others in his blog post. My purpose in detailing these is simple: we can’t ignore such scenarios, or keep on living our lives with the assumption that “everything is gonna be alright”. We need to plan ahead and consider worst-case scenarios to be better prepared for the future.
Do you have ideas for your own technological worst-case scenarios for the year 2016? Write them down in the comments section!
In this post we’ll embark on a journey back in time, to the year 2000, when you were young and eager students. You’re sitting in a lecture given by a bald and handsome futurist. He promises you that within 15 years – i.e. in the year 2015 – the exponential growth in computational capabilities will ensure that you can hold a super-computer in your hands.
“Yeah, right,” a smart-looking student sniggers loudly, “and what will we do with it?”
The futurist explains that in the future you will watch movies and listen to music on that tiny computer. You exchange bewildered looks with your friends. You all find that difficult to believe – how can you store large movies on such a small computer? The futurist explains that another trend – exponential growth in data storage – means that your hand-held super-computer will also store tens of thousands of megabytes.
You see some people in the audience rolling their eyes – promises, promises! Yet you are willing to keep on listening. Of course, the futurist then completely jumps off the cliff of rationality, and promises that in 15 years, everyone will enjoy wireless connectivity almost everywhere, at a speed of tens of megabytes per second.
“That makes no sense.” The smart student laughs again. “Who will ever need such a wireless network? Almost nobody has laptop computers anyway!”
The futurist reminds you that everyone is going to carry super-computers on their bodies in the future. The heckler laughs again, loudly.
The Failure of Segregation
I assume you realize the point by now. The failure demonstrated in this exchange is what I call The Failure of Segregation. It is an incredibly common failure, stemming from our need to focus on only a single trend, and missing the combined and cumulative impacts of two, three or even ten trends at the same time.
In the example above, the forecast made by the futurist would not have been reasonable if only one trend were analyzed. Who needs superfast Wi-Fi if there are no advanced laptops and smartphones to use it? Almost nobody. So from a rational point of view, there’s no reason to invest in such a wireless network. It is only when you consider the three trends together – exponential growth in computational capabilities, in data storage and in wireless networking – that you can understand the future.
Every product we enjoy today is the result of several trends coming to fruition together. Facebook, for example, would not have been nearly as successful if not for these trends –
Exponential growth in computational capabilities, so that nearly everyone has a personal computer.
Miniaturization and mobilization of computers into smartphones.
Exponential improvement of digital cameras, so that every smartphone has a camera today.
Cable internet everywhere.
Wireless internet (Wi-Fi) everywhere.
Cellular internet connections provided by the cellular phone companies.
GPS receiver in every smartphone.
The social trend of people using online social networks.
These are only eight trends, but I’m sure there are many others standing behind Facebook’s success. Only by looking at all eight trends could we have hoped to forecast the future accurately.
Unfortunately, it’s not that easy to look into all the possible trends at the same time.
A Problem of Complexity
Let’s say that you are now aware of the Failure of Segregation, and so you try to contemplate all of the technological trends together, to obtain a more accurate image of the future. If you try to consider just three technological trends (A, B and C) and the ways they could work together to create new products, you would have four possible results: AB, AC, BC and ABC. That’s not so bad, is it?
However, if you add just one more technological trend to the mix, you’ll find yourself with eleven possible results. Do the calculations yourself if you don’t believe me. The formula is relatively simple, with N being the number of trends you’re considering, and X being the number of possible combinations of two or more trends –

X = 2^N – N – 1
It’s obvious that for just ten technological trends, there are about a thousand different ways to combine them together. Considering twenty trends will cause you a major headache, and will bring the number of possible combinations up to one million. Add just ten more trends, and you get a billion possible combinations.
To give you an understanding of the complexity of the task at hand, the international consulting firm Gartner has mapped 37 of the most highly anticipated technological trends in its 2015 Hype Cycle. I’ll let you do the calculations yourself for the number of combinations stemming from all of these trends.
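The numbers above all follow from one closed form: every subset of the N trends, minus the empty set and the N single-trend subsets, gives X = 2^N – N – 1 combinations. A quick Python sketch checks this formula against brute-force enumeration for small N, then evaluates it for the trend counts mentioned in the text:

```python
from itertools import combinations

def combo_count(n):
    """Combinations of n trends taken two or more at a time: 2**n - n - 1."""
    return 2**n - n - 1

def brute_force(n):
    """Explicitly enumerate every subset of size >= 2, as a sanity check."""
    return sum(1 for k in range(2, n + 1)
               for _ in combinations(range(n), k))

# The closed form matches brute-force enumeration for small n.
for n in (3, 4, 5, 6):
    assert combo_count(n) == brute_force(n)

# The counts from the text: 3 trends -> 4 combinations, 4 -> 11,
# 10 -> about a thousand, 20 -> about a million, 30 -> about a billion,
# and Gartner's 37 trends -> about 137 billion.
for n in (3, 4, 10, 20, 30, 37):
    print(n, combo_count(n))
```

For Gartner’s 37 trends, the formula yields roughly 137 billion possible combinations – which is the whole point: no human analyst can examine them all.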
The problem, of course, becomes even more complicated once you realize you can combine the same two, three or ten technologies to achieve different results. Smart robots (trend A) enjoying machine learning capabilities (trend B) could be used as autonomous cars, or they could be used to teach pupils in class. And of course, throughout this process we pretend to know that said trends will continue just the way we expect them to – and trends rarely do that.
What you should be realizing by now is that the opposite of the Failure of Segregation is the Failure of Over-Aggregation: trying to look at tens of trends at the same time, even though the human brain cannot hold such an immense variety of resultant combinations and solutions.
So what can we do?
Dancing between Failures
Sadly, there’s no golden rule or simple solution to these failures. The important thing is to be aware of their existence, so that discussions about the future are not oversimplified into considering just one trend, detached from the others.
Professional futurists use a variety of methods, including scenario development, general morphological analysis and causal layered analysis to analyze the different trends and attempt to recombine them into different solutions for the future. These methodologies all have their place, and I’ll explain them and their use in other posts in the future. However, for now it should be clear that the incredibly large number of possible solutions makes it impossible to consider only one future with any kind of certainty.
In some of the future posts in this series, I’ll delve deeper into the various methodologies designed to counter the two failures. It’s going to be interesting!
I often imagine myself meeting James Clerk Maxwell, one of the greatest physicists in the history of the Earth, and the one indirectly responsible for almost all the machinery we’re using today – from radios to television sets and even power plants. He was recognized as a genius in his own time, and became a professor at the age of 25. His research resulted in Maxwell’s Equations, which describe the connection between electric and magnetic fields. Every electronic device in existence today, and practically all the power stations transmitting electricity to billions of souls worldwide – they all owe their existence to Maxwell’s genius.
And yet when I approach that towering intellectual of the 19th century in my imagination, and try to tell him about all that has transpired in the 20th century, I find that he does not believe me. That is quite unseemly of him, seeing as he is a figment of my imagination, but when I devote some more thought to the issue, I realize that he has no reason to accept any word that I say. Why should he?
At first I decide to go cautiously with the old boy, and tell him about X-rays – whose discovery was made in 1895, just 16 years after Maxwell’s death. “Are you talking of light that can go through the human body and chart all the bones in the way?” he asks me incredulously. “That’s impossible!”
And indeed, no scientific school in 1879 – the year of Maxwell’s death – could support the idea of X-rays.
I decide to jump ahead and skip the theory of relativity, and instead tell him about the atom bombs that demolished Hiroshima and Nagasaki. “Are you trying to tell me that just by banging together two pieces of that element which you call uranium-235, I can release enough energy to level an entire town?” he scoffs. “How gullible do you think I am?”
And once again, I find that I cannot fault him for disbelieving my claims. According to all the scientific knowledge from the 19th century, energy cannot come from nowhere. Maxwell, for all his genius, does not believe me, and could not have forecast these advancements when he was alive. Indeed, no logical forecasters from the 19th century would have made these predictions about the future, since they suffered from the Failure of the Paradigm.
A paradigm, according to Wikipedia, is “a distinct set of concepts or thought patterns”. In this definition one could include theories and even research methods. More to the point, a paradigm describes what can and cannot happen. It sets the boundaries of belief for us, and any forecast that falls outside of these boundaries requires the forecaster to come up with extremely strong evidence to justify it.
Up to our modern times and the advent of science, paradigms changed at a snail’s pace. People in medieval times largely figured that their children would live and die the same way they themselves did, as would their grandchildren and great-grandchildren, up to the day of rapture. But then Science came, with thousands of scientists researching the movement of the planets, the workings of the human body – and the connections between the two. And as they uncovered the mysteries of the universe and the laws that govern our bodies, our planets and our minds, paradigms began to change, and the impossible became possible and plausible.
The discovery of the X-rays is just one example of an unexpected shift in paradigms. Other such shifts include –
Using nuclear energy in reactors and in bombs
Lord Rutherford – the “father of nuclear physics” at the beginning of the 20th century – often denigrated the idea that the energy stored in matter could ever be utilized by mankind, and yet one year after his death, the fission of the uranium nucleus was discovered.
Using electricity to power the world
According to the legend, the great experimental physicist Michael Faraday was paid a visit by governmental representatives back in the 19th century. Faraday showed the delegation his clunky and primitive electric motors – the first of their kind. The representatives were far from impressed, and one of them asked “what could possibly be the use for such toys?” Faraday’s answer (which is probably more urban myth than fact) was simple – “what use is a newborn baby?”
Today, our entire economy and life are based on electronics and on the power obtained from electric power plants – all of them based on Faraday’s innovations, and completely unexpected at his time.
Induced Pluripotent Stem Cells
This paradigm shift happened just nine years ago. It was believed that once biological cells mature, they can never ‘go back’ and become young again. Shinya Yamanaka and other researchers turned that belief on its head in 2006, by genetically engineering mature cells back into youth, turning them into stem cells. That discovery earned Yamanaka the 2012 Nobel Prize.
How Paradigms Advance
It is most illuminating to see how computers have advanced throughout the 20th century, and have constantly shifted from one paradigm to the other along the years. From 1900 to the 1930s, computers were electromechanical in nature: slow and cumbersome constructs with electric switches. As technology progressed and new scientific discoveries were made, computers progressed to using electric relay technology, and then to vacuum tubes.
One of the first and best-known computers based on vacuum tube technology was the ENIAC (Electronic Numerical Integrator and Computer), which weighed 30 tons and used 200 kilowatts of electricity. It could perform 5,000 calculations a second – a task which every smartphone today exceeds without breaking a sweat, since smartphones are based on the new paradigms of transistors and integrated circuits.
At each point in time, if you were to ask most computer scientists whether computers could progress much beyond their current state of the art, the answer would’ve been negative. If the scientists and engineers working on the ENIAC had been told about a smartphone, they would’ve been completely baffled. “How can you put so many vacuum tubes into one device?” they would’ve asked. “And where’s the energy to operate them all going to come from? This ‘smartphone’ idea is utter nonsense!”
And indeed, one cannot build a smartphone with vacuum tubes. The entire computing paradigm needed to change in order for this new technology to appear on the world’s stage.
What does the Failure of the Paradigm mean? Essentially, it means that we cannot reliably forecast a future distant enough for a paradigm shift to occur. Once the paradigm changes, all previous limitations and boundaries are dissolved, and what happens next is up for grabs.
This insight may sound gloomy, since it makes clear that reliable forecasts are impossible to make a decade or two into the future. And yet, now that we understand our limitations we can consider ways to circumvent them. The solutions I’ll propose for the Failure of the Paradigm are not as comforting as the mythical idea that we can know the future, but if you want to be better prepared for the next paradigm, you should consider employing them.
Solutions for the Failure of the Paradigm
First Solution: Invent the New Paradigm Yourself
The first solution is quite simple: invent the new paradigm yourself, and thus be the one standing on top when the new paradigm takes hold. The only problem is, nobody is quite certain what the next paradigm is going to be. This is the reason why we see the industry giants of today – Google, Facebook, and others – buying companies left and right. They’re purchasing drone companies, robotics companies, A.I. companies, and any other company that looks as if it has a chance to grow into a new and successful paradigm a decade from now. They’re spreading and diversifying their investments, since if even one of these investments leads to the new paradigm, they will be the Big Winners.
Of course, this solution can only work for you if you’re an industry giant, with enough money to spare on many futile directions. If you’re a smaller company, you might consider the second solution instead.
Second Solution: Utilize New Paradigms Quickly
The famous entrepreneur Peter Diamandis often encourages executives to invite small teams of millennials into their factories and companies, and to ask them to actively come up with ideas to disrupt the current workings of the company. The millennials – people between 20 and 30 years old – are less bound by old paradigms than the people currently working in most companies. Instead, they live the new paradigms of social media, internet everywhere, constant surveillance, loss of privacy, etc. They can utilize and deploy the new paradigms rapidly, in a way that makes the old paradigms seem antique and useless.
This solution, then, helps executives circumvent the Failure of the Paradigm by adapting to new paradigms as quickly as possible.
Third Solution: Forecast Often, and Read Widely
One of the rules for effective forecasting, as noted futurist Paul Saffo wrote in the Harvard Business Review in 2007, is to forecast often. The proficient forecaster needs to be constantly on the alert for new discoveries and breakthroughs in science and technology – and be prepared to suggest new forecasts accordingly.
The reason behind this rule is that new paradigms rarely (if ever) appear out of the blue. There are always telltale signs, called Weak Signals in foresight jargon. Such weak signals can be uncovered by searching for new patents, reading Scientific American, Science and Nature to find out about new discoveries, and generally browsing through the New York Times every morning. By doing so, one can develop a better hunch about the onset of a new paradigm.
Fourth Solution: Read Science Fiction
You knew that one was coming, didn’t you? And for a good reason, too. Many science fiction novels are based on some kind of paradigm shift occurring, forcing the world to adapt to it. Sometimes it’s the creation of the World Wide Web (which William Gibson speculated about in his science fiction works), or rockets being sent to the moon (as was the case in Jules Verne’s From the Earth to the Moon), or even dealing with cloning, genetic engineering and bringing back extinct species, as in Michael Crichton’s Jurassic Park.
Science fiction writers consider the possible paradigm shifts and analyze their consequences and implications for the world. Gibson and other science fiction writers understood that if the World Wide Web were created, we would have to deal with cyber-hackers, with cloud computing, and with the mass-democratization of information. In short, they forecast the implications of the new paradigm shift.
Science fiction does not provide us with a solid forecast for the future, then, but it helps us open our minds and escape the Failure of the Paradigm by considering many potential new paradigms at the same time. While there is no research to support this claim, I truly believe that avid science fiction readers are better prepared for new paradigms than everyone else, as they’ve already lived those new paradigms in their minds.
Fifth Solution: Become a Believer
When trying to look far into the future, don’t focus on the obstacles of the present paradigm. Rather, if you see that similar obstacles have constantly been overcome in the past (as happened with computers), there is good reason to assume that the current obstacles will be defeated as well, and a new paradigm will shine through. Therefore, you have to believe that mankind will keep on finding solutions and developing new paradigms. The forecaster is forced, in short, to become a believer.
Obviously, this is one of the toughest solutions for us to implement as rational human beings. It also requires us to look carefully at each technological field in order to understand the nature of the obstacles, and how long it will take (judging by the trends of the past) to come up with a new paradigm to overcome them. Once the forecaster identifies these parameters, he can be more secure in his belief that new paradigms will be discovered and established.
Sixth Solution: Beware of Experts
This is more of an admonishment than an actual solution, but is true all the same. Beware of experts! Experts are people whose knowledge was developed during the previous paradigm, or at best during the current one. They often have a hard time translating their knowledge into useful insights about the next paradigm. While they can highlight all the difficulties existing in the current paradigm, it is up to you to consider how in touch those experts are with the next potential paradigms, and whether or not to listen to their advice. That’s what Arthur C. Clarke’s first law is all about –
“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”
The Failure of the Paradigm is a daunting one, since it means we can never forecast the future as reliably as we would like to. Nonetheless, business people today can employ the above solutions to be better prepared for the next paradigm, whatever it turns out to be.
Of all the proposed solutions to the Failure of the Paradigm, I like the fourth one the best: read science fiction. It’s a cheap solution that also brings much enjoyment to one’s life. In fact, when I consult for industrial firms, I often hire science fiction writers to write stories about the possible future of the company in light of a few potential paradigms. The resulting stories are read avidly by many of the employees in the company, and in many cases show the executives just how unprepared they are for these new paradigms.
Two weeks ago it was “Back to the Future Day”. More specifically, Doc and Marty McFly reached the future on October 21st, 2015, in the second movie of the series. Being a futurist, I was invited to several television and radio talk shows to discuss the shape of things to come, which is pretty ridiculous, considering that the future is always about to come, and we should talk about it every day – not just on a day arbitrarily chosen by the scriptwriters of a popular movie.
All the same, I’ll admit I had an uplifting feeling. On October 21st, everybody was talking about the future. That made me realize something about science fiction: we really need it. Not just for the technological ideas that it gives us (like cellular phones and Tricorders from Star Trek), but also for the expanded view of the future that it provides us with.
Sci-fi movies and books take root in our culture, and establish a longing for, and an expectation of, a well-defined future. In that way, sci-fi creations provide us with a valuable social tool: a radically prolonged Cycle-time, which is the length of time an individual in society tends to look forward to and plan for in advance.
Cycle-times in the Past
As human beings, and as living organisms in general, mother evolution has shaped us to fulfill one main goal: transferring our genes to our descendants. We are, to paraphrase Richard Dawkins, trucks that carry the load of our genes into the future, as far as possible from our current starting point. It is curious to realize that in order to preserve our genes into the future, we must be almost totally aware of the present. A prehistoric person who was not always on the alert for encroaching wolves, lions and tigers would not have survived very long. Millions of years of evolution have designed living organisms so that they focus almost entirely on the present.
And so, for the first few tens of thousands of years of human existence, we ran away from the tigers and chased after the deer, with a very short cycle-time, probably lasting less than a day.
It is difficult, if not impossible, to know exactly when we managed to strike a bargain with Grandfather Time. Such a bargain provided early humans with great power, and all they needed to do in return was to measure and document the passing of hours and days. I believe we started measuring time quite early in human history, since time measurement brought power, and power ensured survivability and the passing of genes – and of time-measurement methodologies – to the next generation.
The first cycle-times were probably quite short, lasting less than a full day. Early humans could roughly calculate how long it would take the sun to set according to its position in the sky, and so they knew when to start or end a hunt before darkness fell. Their cycle-time was a single day. The woman who wanted to anticipate her upcoming menstruation period – which could draw predators and make it more difficult for her to hunt – could do so by looking at the moon, and by making a mark on a stick every night. Her cycle-time was a full month.
The great leap forward occurred in agricultural civilizations, which were based on an understanding of the cyclical nature of time: a farmer must know the cyclical order of the seasons of the year, and realize their significance for his field and crops. Without looking ahead a full year into the future, agricultural civilizations could not reach their full height. And so, ten thousand years ago, the first agricultural civilizations set a cycle-time of a whole year.
And that is pretty much the way it remained ever since that time.
Religions initially had the potential to provide longer cycle-times. The clergy often documented history and attempted to forecast the future – usually by creating or establishing complex mythologies. Judaism prolonged the agricultural cycle-time, for example, by setting a seven-year cycle of tending one’s field: six years of growing crops, and a seventh year (Shmita, in Hebrew) in which the fields are allowed to rest.
“For six years you are to sow your fields and harvest the crops, but during the seventh year let the land lie unplowed and unused.” – Exodus 23:10-11.
Most of the religious promises for the future, however, were usually vague, useless or even harmful. In his book, The Clock of the Long Now, Stewart Brand repeats an old joke that caricatures, with more than a shred of truth, the difficulties of the Abrahamic religions (i.e. Judaism, Christianity and Islam) in dealing with the future and creating useful cycle-times in the minds of their followers. “Judaism,” writes Brand, “says [that] the Messiah is going to come, and that’s the end of history. Christianity says [that] the Messiah is going to come back, and that’s the end of history. Islam says [that] the Messiah came, and history is irrelevant.” [the quote has been slightly modified for brevity]
While this is obviously a joke, it reflects a deeper truth: that religions (and cultures) tend to focus on a single momentous future, and ignore anything else that comes along. Worse, the vision of the future they give us is largely unhelpful since its veracity cannot be verified, and nobody is willing to set an actual date for the coming of the Messiah. Thus, followers of the Abrahamic religions continue their journey into the future, with their eyes covered with opaque glasses that have only one tiny hole to let the light in – and that hole is in the shape of the Messiah.
Why We Need Longer Cycle-times
When civilizations fail to consider the future in long cycle-times, they head towards inevitable failure and catastrophe. Jared Diamond illustrates this point time and time again in his masterpiece Collapse, in which he reviews several extinct civilizations, and the various ways in which they failed to adapt to their environment or plan ahead.
Diamond describes how the people of Easter Island did not think in the cycle-times of trees and soil, but in shorter, human cycle-times. They greedily cut down too many of the island’s trees, and over several decades squandered its natural resources. Similarly, the settlers in Greenland could not think in a cycle-time long enough to encompass the grasslands and the changing climate; after their goats and cattle damaged Greenland’s delicate ecology, they were forced to evacuate the island or freeze to death.
Agricultural civilizations, as I wrote earlier, naturally tend to think in cycle-times no longer than several years, and find it difficult to adjust their thinking to longer ones: those that apply to trees, soil, and the evolution of animals (and humans). As a result, agricultural civilizations damage all of the above, disrupt their environment, and eventually disintegrate and collapse when their surroundings can no longer support them.
If we wish to keep humanity in existence over time, we must switch to thinking in longer cycle-times that span decades and centuries. This is not to say that we should plan too far ahead – it’s always dangerous to forecast into the long term – but we should constantly attempt to consider the consequences of our actions in the far-away future. We should always think of our children and grandchildren as we take steps that could determine their fate several decades from now.
But how can we implement such long-term cycle-times into human culture?
If you still remember where I began this article, you probably realize the answer by now. In order to create cycle-times that last decades and centuries, we need to visit the future again and again in our imagination. We need to compare our achievements in the present to our expectations and visions of the future. This is, in effect, the end-result of science fiction movies and books: the best and most popular of them create new cycle-times that become entwined in human culture, and make us examine ourselves in the present, in the light of the future.
Science fiction movies and stories have an impressive capability to influence social consciousness. Karel Capek’s 1920 theater play R.U.R., for example, not only added the word “robot” to the English lexicon, but also infected Western society with the fear that robots will take over mankind – just as they did in Capek’s play. Another influential movie, The Terminator, released in 1984, solidified and consolidated that fear.
Science fiction does not have to make us fear the future, though. In Japanese culture, the cartoon robot Astro Boy became a national symbol in 1952, and ever since, the Japanese have been much more open and accepting toward robots.
The most influential science fiction creations are those that include dates, which in effect are forecasts for certain futures. These forecasts provide us with cycle-times that we can use to anchor our thinking whenever we contemplate the future. When the year 1984 came, journalists all over the world tried to analyze society and see whether George Orwell’s dark and dystopian dream had actually come true. When October 21st, 2015 arrived barely two weeks ago, I was interviewed almost all day long about the technological and societal forecasts made in Back to the Future. And when the year 2029 finally comes – the year in which Skynet is supposed to be controlling humanity, according to The Terminator – I confidently forecast that numerous robotics experts will find themselves invited to talk shows and other media events.
As a result of the above science fiction creations, and many others, humanity is beginning to enjoy new and ambitious cycle-times: we look forward in our mind’s eye towards well-designated future dates, and examine whether our apocalyptic or utopian visions for them have actually come true. And what a journey into the future that is! The most humble cycle-times in science fiction span several decades ahead. The more grandiose ones leap forward to the year 2364 (Star Trek), 2800 (Dan Simmons’ Hyperion Cantos) or even to the end of the universe and back again (in Isaac Asimov’s short story The Last Question).
The longest cycle-times of science fiction – those dealing with thousands or even millions of years ahead – may not be particularly relevant for us. The shorter cycle-times of decades and centuries, however, receive immediate attention from society, and thus have an influence on the way we conduct ourselves in the present.
Humanity has great need of new cycle-times that will be far longer than any established in its history. While policy makers attempt to take into account forecasts that span decades ahead, the public is generally neither exposed to nor influenced by such reports. Instead, the cycle-times of many citizens are calibrated according to popular science fiction creations.
Hopefully, those longer cycle-times will allow humanity to prepare in advance for existential long-term challenges, such as ecological catastrophes or social collapse. At the very same time, longer cycle-times can also encourage and push forward innovation in certain areas, as entrepreneurs and innovators struggle to fulfill the prophecies made for certain technological developments (just think of all the clunky hoverboards that were invented in the run-up to 2015 as proof).
In short, if you want to save the future, just write science fiction!
When Achariya, an ordinary woman from Cambodia, got pregnant, she was scared out of her wits. Pregnancy can become a death sentence for women in developing countries: every year, more than half a million mothers die during pregnancy or childbirth. In Cambodia specifically, “maternity-related complications are one of the leading causes of death among women ages 15 to 49”, according to the Population Reference Bureau. Out of every 100,000 Cambodian women delivering a baby, 265 do not make it out of the birth room alive. In comparison, in developed countries like Italy, Australia and Israel, only 4–6 mothers out of 100,000 perish during childbirth.
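To get a feel for what that gap means, here is a quick back-of-the-envelope calculation using the figures quoted above (deaths per 100,000 live births; the 4–6 range for developed countries is the one cited in the text, not an official statistic):

```python
# Maternal deaths per 100,000 live births, as quoted in the text above.
cambodia = 265
developed_low, developed_high = 4, 6  # e.g. Italy, Australia, Israel

ratio_conservative = cambodia / developed_high  # compare against the higher figure
ratio_extreme = cambodia / developed_low        # compare against the lower figure

print(f"A Cambodian mother faces roughly {ratio_conservative:.0f}x to "
      f"{ratio_extreme:.0f}x the risk of dying in childbirth.")
```

In other words, the quoted numbers imply a risk roughly 44 to 66 times higher for a Cambodian mother than for one in the developed countries mentioned.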
While there are many different reasons for this high maternal mortality, a prominent one is chronic conditions like anemia caused by iron deficiency in the diet. Dietary iron deficiency affects about 60% of pregnant Cambodian women, and results in premature labor and hemorrhages during childbirth.
While there is good evidence that iron can leach out of cast-iron cookware, such cookware can be too expensive for the average Cambodian family. But in 2008, Christopher Charles, a student from the University of Guelph, had a great idea: he and his team distributed iron discs to women in a Cambodian village, asking them to add one to the pot when making soup or boiling water for rice. In theory, the iron was supposed to leach from the ingot into the food. In practice, the women took the iron nuggets and immediately used them as doorstops, which did not prove as beneficial to their health.
Charles did not let that failure deter him. He realized he needed to find a way to make the women use the iron ingot, and after a conversation with the village elders a solution was found. He recast the iron in the form of a smiling fish – a good luck charm in Cambodian culture. The newly-shaped fish enjoyed newfound success as women in the village began putting it in their dishes, and anemia rates in the village decreased by 43% within 12 months. Today, Charles and his company are scaling up their operations, and in 2014 alone supplied more than 11,000 iron fish to families in Cambodia.
Pace Layer Thinking
For me, the main lesson from the iron fish experiment is that new technology cannot be measured and analyzed without considering the way in which society and current culture will accept it. While this principle sounds obvious, many entrepreneurs overlook it, and find themselves struggling against societal forces out of their control, instead of adapting their inventions so that society can easily accept them.
We have here, in essence, a very clear demonstration of the Pace Layering model developed and published by Stewart Brand back in 1999. Brand distinguishes between six different layers which describe society, each of which develops and changes at a pace of its own. Those layers are, in order from the ones that change most rapidly to the ones that are nearly immovable: Fashion, Commerce, Infrastructure, Governance, Culture, and Nature.
The upper layers move forward more rapidly than the lower ones. They are the Uber and Airbnb (Commerce layer) that stand in conflict with government regulations (Governance layer). They are the ear extenders (Fashion layer) that stand in conflict with Western civilization’s unwritten prohibition against significantly altering one’s body (Culture layer). And sometimes they are even revolutionary governmental models used to control the population, as with the communist regime in the USSR, which conflicted with the very biological nature of the human beings placed in control of the country (Governance layer vs. Nature layer).
As Brand illustrates in his lecture at The Interval, the upper layers are not only the faster ones, but also discontinuous – meaning that they evolve rapidly and jump forward all the time. Unsurprisingly, these layers are where innovations and revolutions occur, and as a result they get all the attention.
The lower layers are the continuous ones. Consider culture, for example. It is impressively (and frustratingly) difficult to bring changes into a cultural item like religion. It takes decades – and sometimes thousands of years – to make lasting changes in religion. Once such changes occur, however, they can remain present for similarly vast periods of time. And some would say that religion and culture are blindingly fast when compared to the Nature layer, which is almost impossible to change within an individual’s lifetime.
You can easily argue that the Pace Layer model is flawed, or missing some parts. Evolutionary psychologists, for example, believe that our psychology is a result of our genetics – and thus would probably put some aspects of Culture, Commerce, Governance and even Fashion at the Nature level. Synthetic biologists would say that today we can play with Nature as we wish, and as a result the Nature level should be moved up to a higher layer. It could even be said that companies like Uber (Commerce level) are turning out to have more power than governments (Governance level). Regardless, the model provides us with a good starting point when we try to think about the present and the future.
What does the Pace Layer model have to do with the smiling luck fish? Everything and nothing. While I don’t know whether Charles knew of the model, a similar solution could have been reached by considering the problem in a Pace Layer style of thinking. Charles’ problem, in essence, revolved around creating a new Fashion. He had a hard time doing that without resorting to a lower layer – the Culture layer – and reshaping his idea in ways that would fit the existing culture.
Pace Thinking about the Israel-Palestine Conflict
We can use Pace Layer thinking to consider other problems and challenges of modern times. It’s particularly interesting for me to analyze the ongoing Israel-Palestine conflict from a layer-based point of view.
There is currently a wave of terrorist attacks in Israel, carried out by both Palestinians and Israeli-Arabs from East Jerusalem. I would put this present outbreak at the Fashion level: it’s happening rapidly, it’s contagious (more terrorists are making attempts every day), and it’s drawing all of our attention to it. In short, it’s a crisis we should set aside when trying to get a better long-term view of the overall problem.
What are the other layers we could work with, in regards to the conflict? There is the Commerce layer, representing the trade happening between Israel and the Palestinian Authority. If we want to lessen the frequency of crises like the current one, we should probably find ways to increase trade between the two parties. We could also consider the Infrastructure and Governance layers, thinking about shared cities, buildings or other infrastructures.
Last but not least – and probably most importantly – we need to consider the Culture layer. There is no denying that some aspects of the conflict revolve around the religions and other cultural habits of each side. When a young Israeli-Arab gets out of bed in the morning, feels repressed and decides to murder a Jewish citizen, we need to ask ourselves why the culture around him hasn’t encouraged him to turn to other means of expressing his anger, such as writing a column in the paper or getting into politics. So the culture must change – and we need to find ways to bring about such a change.
Obviously, these preliminary ideas and thoughts are merely starting points for a deeper analysis of the problem, but they serve to highlight the fact that every problem and every conflict can be analyzed at several different layers, none of which should be ignored – and that the best solutions take several different layers into consideration.
The Pace Layer model of thinking can be a powerful tool in the analysis of every challenge, and could be used in many different cases. We’ll probably use it in the future in other articles on this blog, to analyze different situations and crises and examine the deeper layers that exist under the most fashionable and rapid ones.
In the meantime, I dare you to use the Pace Layer model to consider problems of your own – whether they’re of the national kind or entrepreneurial in nature – and report in the comments section what you’ve found out.