A few months ago I received a tempting offer: to become ISIS’ chief technology officer.
How could I refuse?
Before you pick up the phone and call the police, you should know that it was ‘just’ a wargame, initiated and operated by the strategic consulting firm Wikistrat. Many experts on ISIS and the Middle East in general took part in the wargame, assuming the roles of the various sides waging war right now on Syrian soil – from Syrian president Bashar al-Assad, to the Western-backed rebels, and even ISIS.
This kind of wargame is pretty common in security organizations, as a way to understand how the enemy thinks. As Harper Lee wrote, “You never really understand a man… until you climb into his skin and walk around in it.”
And so, to understand ISIS, I climbed into its skin, and started thinking aloud and discussing with my ISIS teammates what we could do to really overwhelm our enemies.
But who are those enemies?
In one word, everyone.
This is not an overestimate. Abu Bakr al-Baghdadi, the leader of ISIS and its self-proclaimed caliph, warned Muslims in 2015 that the organization’s war is – “the Muslims’ war altogether. It is the war of every Muslim in every place, and the Islamic State is merely the spearhead in this war.”
Other spiritual authorities who help explain ISIS’ policies to foreigners and potential converts agree with Baghdadi. The influential Muslim preacher Abu Baraa has similarly stated that “the world is divided into two camps. Make sure you are on the side of the Muslims. You shouldn’t be on the side of the infidels, nor should you be on the fence, neutral…”
This approach is, of course, quite comfortable for ISIS, since the organization needs to draw as many Muslims as possible to its camp. And so, thinking as ISIS, we realized that we must find a way to turn this seemingly small conflict of ours into a full-blown religious war: Muslims against everyone else.
Unfortunately, it seems most Muslims around the world do not agree with those ideas.
How could we convince them to accept the truth of the global religious war?
It was obvious that we needed to create a fracture between the Muslim and Christian world, but world leaders weren’t playing to our tune. The last American president, Barack Obama, fiercely refused to blame Islam for terror attacks, emphasizing that “We are not at war with Islam.”
French president Francois Hollande was even worse for our cause: after an entire summer of terror attacks in France, he still refused to blame Islam. Instead, he instituted a new Foundation for Islam in France, to improve relations with the nation’s Muslim community.
The situation was clearly dire. We needed reinforcements in fighters from Western countries. We needed Muslims to join us, or at the very least rebel against their Western governments, but very few were joining us from Europe. Reports put the number of European Muslims joining ISIS at barely 4,000, out of 19 million Muslims living in Europe. That means just 0.02% of the Muslim population actually cared enough about ISIS to join us!
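For what it’s worth, the percentage follows directly from the two figures cited above – a quick back-of-the-envelope check (the 4,000 and 19 million numbers come from the reports; only the arithmetic is mine):

```python
# Figures cited in the reports above.
european_muslims = 19_000_000   # Muslims living in Europe
joined_isis = 4_000             # European Muslims reported to have joined ISIS

share = joined_isis / european_muslims * 100
print(f"{share:.2f}%")  # → 0.02%
```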
Things were even worse in the USA, where, according to the Pew Research Center, Muslims were generally content with their lives. They were just as likely as other Americans to have earned college degrees and attended graduate schools, and to report household incomes of $100,000 or more. Nearly two thirds of Muslims stated that they “do not see a conflict between being a devout Muslim and living in a modern society”. Not much chance to incite a holy war there.
So we agreed on trying the usual things: planning terror attacks, making as much noise as we possibly could, keeping up the fight in the Middle East, and recruiting Muslims on social media. But we realized that things really needed to change if radical Islam were to have any chance at all. We needed a new kind of world leader: one who would play by our ideas of a global conflict; one who would close borders to Muslims, and make Muslim immigrants feel unwanted in their countries; one who would turn a deaf ear to the plea of refugees, simply because they came from Muslim countries.
After a single week in ISIS, it was clear that the organization desperately needed a world leader who thinks and acts like that.
Do you happen to know someone who might fit that bill?
Brandon Sanderson is one of my favorite fantasy and science fiction authors. He produces new books at an incredible pace, and his writing quality does not seem to suffer for it. Steelheart, the first book in his recent sci-fi trilogy The Reckoners, was published in September 2013. Calamity, the third and last book in the series, was published in February 2016. So just three years passed between the first and the last book in the series.
The books themselves describe a post-apocalyptic future, around ten years away from us. In the first book, the hero lives in one of the most technologically advanced cities in the world, with electricity, smartphones, and sophisticated technology at his disposal. Sanderson describes sophisticated weapons used by the police forces in the city, including laser weapons and even mechanized war suits. By the third book, our hero reaches another technologically advanced outpost of humanity, and is suddenly surrounded by weaponized aerial drones.
You may say that the first city chose not to use aerial drones, but that explanation is a bit sketchy, as anyone who has read the books can testify. Instead, it seems to me that in the three years that passed since the original book was published, aerial drones finally made a large enough impact on the general mindset that Sanderson could no longer ignore them in his vision of the future. He realized that his readers would look askance at any vision of the future that does not include aerial drones of some kind. In effect, the drones have become part of the way we think about the future. We find it difficult to imagine a future without them.
Usually, our visions of the future change relatively slowly and gradually. In the case of the drones, it seems that within three years they’ve moved from an obscure technological item to a common myth the public shares about the future.
Science fiction, then, can show us what people in the present expect the future to look like. And therein lies its downfall.
Where Science Fiction Fails
Science fiction can be used to help us explore alternative futures, and it does so admirably well. However, best-selling books must reach a wide audience and resonate with many readers on several different levels. In order to do that, the most popular science fiction authors cannot stray too far from our current notions. They cannot let go of our natural intuitions and core feelings: love, hate, the appreciation we have for individuality, and many others. They can explore themes in which the anti-hero, or The Enemy, defies these commonalities that we share in the present. However, if the author wants to write a really popular book, he or she will take care not to completely forsake the reality we know.
Of course, many science fiction books are meant for an ‘in-house’ audience: the hard-core sci-fi readers who are eager to think outside the box of the present. Alastair Reynolds, in his Revelation Space series, for example, succeeds in writing sci-fi literature for exactly this audience. He writes stories that in many aspects transcend notions of individuality, love and humanity. And he pays the price for this transgression, as his books (to the best of my knowledge) have yet to appear on the New York Times Best Seller list. Why? As one disgruntled reviewer writes about Reynolds’ book Chasm City –
“I prefer reading a story where I root for the protagonist. After about a third of the way in, I was pretty disturbed by the behavior of pretty much everyone.”
Highly popular sci-fi literature is thus forced never to let go completely of present paradigms, which sadly limits its use as a tool for developing and analyzing far-away futures. On the other hand, it’s conceivable that an annual analysis of the most popular sci-fi books could provide us with an understanding of the public state of mind regarding the future.
Of course, there are much easier ways to determine how much hype certain technologies receive in the public sphere. It’s likely that by running data mining algorithms on the content of technological blogs and websites, we would reach better conclusions. Such algorithms can also be run practically every hour of every day. So yeah, that’s probably a more efficient route to figuring out how the public views the future of technology.
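As a rough sketch of that idea, the following counts mentions of technology terms across a small corpus of posts. The posts and keyword list here are invented for illustration; a real system would scrape actual technology blogs and use far more robust text processing:

```python
from collections import Counter
import re

# Hypothetical stand-ins for scraped blog posts.
posts = [
    "New aerial drones deliver packages while VR headsets go mainstream.",
    "Drones, drones everywhere: a review of this year's aerial drones.",
    "CRISPR gene editing moves from the lab toward the clinic.",
]

# Technologies whose public 'hype' we want to gauge (illustrative list).
keywords = ["drone", "vr", "crispr"]

def hype_counts(texts, terms):
    """Count how often each technology term (singular or plural) appears."""
    counts = Counter()
    for text in texts:
        words = re.findall(r"[a-z]+", text.lower())
        for term in terms:
            counts[term] += words.count(term) + words.count(term + "s")
    return counts

print(hype_counts(posts, keywords).most_common())  # drones dominate this toy corpus
```

Ranking the counts over time – per month or per year – would give a crude hype curve for each technology.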
But if you’re looking for an excuse to read science fiction novels for a purely academic reason, just remember you found it in this blog post.
The future of genetic engineering at the moment is a mystery to everyone. The concept of reprogramming life is an oh-so-cool idea, but it is mostly being used nowadays in the most sophisticated labs. How will genetic engineering change in the future, though? Who will use it? And how?
In an attempt to provide a starting point for a discussion, I’ve analyzed the issue according to Daniel Burrus’ “Eight Pathways of Technological Advancement”, found in his book Flash Foresight. While the book provides more insights about creativity and business skills than about foresight, it does contain some interesting gems like the Eight Pathways. I’ve led workshops in the past where I taught chief executives how to use this methodology to gain insights about the future of their products, and it was a great success. So in this post we’ll try applying it to genetic engineering – and we’ll see what comes out.
Eight Pathways of Technological Advancement
Make no mistake: technology does not “want” to advance or to improve. There is no law of nature dictating that technology will advance, or in what direction. Human beings improve technology, generation after generation, to better solve their problems and make their lives easier. Since we roughly understand humans and their needs and wants, we can often identify how technologies will improve in order to answer those needs. The Eight Pathways of Technological Advancement, therefore, are generally those that adapt technology to our needs.
Let’s go briefly over the pathways, one by one. If you want a better understanding and more elaborate explanations, I suggest you read the full Flash Foresight book.
First Pathway: Dematerialization
By dematerialization we mean literally to remove atoms from the product, leading directly to its miniaturization. Cellular phones, for example, have become much smaller over the years, as did computers, data storage devices and generally any tool that humans wanted to make more efficient.
Of course, not every product undergoes dematerialization. Even if we were to miniaturize car engines, cars would still need to stay large enough to hold at least one passenger comfortably. So we need to take into account that the device should still be able to fulfill its original purpose.
Second Pathway: Virtualization
Virtualization means that we take certain processes and products that currently exist or are being conducted in the physical world, and transfer them fully or partially into the virtual world. In the virtual world, processes are generally streamlined, and products have almost no cost. For example, modern car companies take as little as 12 months to release a new car model to market. How can engineers complete the design, modeling and safety testing of such complicated models in less than a year? They’re simply using virtualized simulation and modeling tools to design the cars, up to the point when they’re crashing virtual cars with virtual crash dummies in them into virtual walls to gain insights about their (physical) safety.
Third Pathway: Mobility
Human beings invent technology to help them fulfill certain needs and take care of their woes. Once that technology is invented, it’s obvious that they would like to enjoy it everywhere they go, at any time. That is why technologies become more mobile as the years go by: in the past, people could only speak on the phone from the post office; today, wireless phones can be used anywhere, anytime. Similarly, cloud computing enables us to work on every computer as though it were our own, by utilizing cloud applications like Gmail, Dropbox, and others.
Fourth Pathway: Product Intelligence
This pathway does not need much of an explanation: we experience its results every day. Whenever our GPS navigation system speaks up in our car, we are reminded of the artificial intelligence engines that help us in our lives. As Kevin Kelly wrote in his WIRED piece in 2014 – “There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ.”
Fifth Pathway: Networking
The power of networking – connecting people and items – becomes clear in our modern age: Napster was the result of networking; torrents are the result of networking; even bitcoin and blockchain technology are manifestations of networking. Since products and services can gain so much from connecting their users, many of them take this pathway into the future.
Sixth Pathway: Interactivity
As products gain intelligence of their own, they also become more interactive. Google completes our search phrases for us; Amazon suggests the products we should desire according to our past purchases. These service providers interact with us automatically, providing a better service for the individual instead of catering to some average of the masses.
Seventh Pathway: Globalization
Networking means that we can make connections all over the world, and as a result – products and services become global. Crowdfunding firms like Kickstarter, which suddenly enable local businesses to gain support from the global community, are a great example of globalization. Small firms can find themselves capable of catering to a global market thanks to improvements in mail delivery systems – like a company that delivers socks monthly – and that is another example of globalization.
Eighth Pathway: Convergence
Industries are converging, and so are services and products. The iPhone is a convergence of a cellular phone, a computer, a touch screen, a GPS receiver, a camera, and several other products that have come together to create a unique device. Similarly, modern aerial drones could also be considered a result of the convergence pathway: a camera, a GPS receiver, an inertia measurement unit, and a few propellers to carry the entire unit in the air. All of the above are useful on their own, but together they create a product that is much more than the sum of its parts.
How could genetic engineering progress along the Eight Pathways of technological improvement?
Pathways for Genetic Engineering
First, it’s safe to assume that genetic engineering as a practice will require less space and fewer tools to conduct (Dematerializing genetic engineering). That is hardly surprising, since biotechnology companies are constantly releasing new kits and appliances that streamline, simplify and add efficiency to lab work. This trend also answers the need for mobility (the third pathway), since it means complicated procedures could be performed outside the top universities and labs.
As part of streamlining the work process of genetic engineers, some elements would be virtualized. As a matter of fact, the Virtualization of genetic engineering has been taking place over the past two decades, with scientists ordering DNA and RNA codes from the internet, and browsing over virtual genomic databases like NCBI and UCSC. The next step of virtualization seems to be occurring right now, with companies like Genome Compiler creating ‘browsers’ for the genome, with bright colors and easily understandable explanations that reduce the level of skill needed to plan an experiment involving genetic engineering.
How can we apply the pathway of Product Intelligence to genetic engineering? Quite easily: virtual platforms for designing genetic engineering experiments will involve AI engines that aid the experimenter with the task. The AI assistant will understand what the experimenter wants to do, suggest ways, methodologies and DNA sequences to help accomplish it, and possibly even – in a decade or two – conduct the experiment automatically. Obviously, that also answers the criterion of Interactivity.
If this described future sounds far-fetched, you should take into account that there are already lab robots conducting the most convoluted experiments, like Adam and Eve (see below). As the field of robotics makes strides forward, it is actually possible that we will see similar rudimentary robots working in makeshift biology Do-It-Yourself labs.
Networking and Globalization are essentially the same for the purposes of this discussion, and complement Virtualization nicely. Communities of biology enthusiasts are already forming all over the world, and they’re sharing their ideas and virtual schematics with each other. The iGEM (International Genetically Engineered Machines) annual competition is good evidence of that: undergraduate students worldwide take part in this competition, designing parts of useful genetic code and sharing them freely with each other. That’s Networking and Globalization for sure.
Last but not least, we have Convergence – the convergence of processes, products and services into a single overarching system of genetic engineering.
Well, then, what would a convergence of all the above pathways look like?
The Convergence of Genetic Engineering
Taking together all of the pathways and converging them together leads us to a future in which genetic engineering can be performed by nearly anyone, at any place. The process of designing genetic engineering projects will be largely virtualized, and will be aided by artificial assistants and advisors. The actual genetic engineering will be conducted in sophisticated labs – as well as in makers’ houses, and in DIY enthusiasts’ kitchens. Ideas for new projects, and designs of successful past projects, will be shared on the internet. Parts of this vision – like virtualization of experiments – are happening right now. Other parts, like AI involvement, are still in the works.
What does this future mean for us? Well, it all depends on whether you’re optimistic or pessimistic. If you’re prone to pessimism, this future may look to you like a disaster waiting to happen. When teenagers and terrorists are capable of designing and creating deadly bacteria and viruses, the future of mankind is far from safe. If you’re an optimist, you could consider that as the power to re-engineer life comes down to the masses, innovations will rise everywhere. We will see glowing trees replacing lightbulbs in the streets, genetically engineered crops with better traits than ever before, and therapeutics (and drugs) being synthesized in human intestines. The truth, as usual, is somewhere in between – and we still have to discover it.
When most of us think of the Marine Corps, we usually imagine sturdy soldiers charging headlong into battle, or carefully sniping at enemy combatants from the tops of buildings. We probably don’t imagine them reading – or writing – science fiction. And yet, that’s exactly what 15 marines are about to do two weeks from now.
The Marine Corps Warfighting Lab (I bet you didn’t know they have one) and The Atlantic Council are holding a Science Fiction Futures Workshop in early February. And guess what? They’re looking for “young, creative minds”. You probably have to be a marine, but even if you aren’t – maybe you’ll have a chance if you submit your application as well.
Twenty years ago, when I was young and beautiful, I picked up a wrapped pack of cards in a computer games store, and read for the first time the tag “Magic: the Gathering”. That was the beginning of my long-time romance with the collectible card game. I imported the game to Israel, translated the rules leaflet into Hebrew for my friends, and went on to play semi-professionally for twenty years, up to the point when I became the Israeli champion. The game pretty much shaped my years as a teenager, and helped me make friends and meet interesting people from all over the world.
That is why it’s so sad to me to see the state of the game right now, and realize that it is almost certainly doomed to fail in the long run.
The Rise and Decline of Magic the Gathering
Make no mistake: Magic the Gathering (just Magic for short) is still the top dog among collectible card games in the physical world. According to a report released in 2014, the annual revenue from Magic grew by 182% between 2009 and 2014, reaching a total of around $250 million a year. That’s a lot of money, to be sure.
The only problem is that Hearthstone, a digital card game released at the beginning of 2014, has reached annual revenues of around $240 million in less than two years. I will not be surprised to see those numbers growing even larger in the future.
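Incidentally, reading “grown by 182%” literally, the report’s figures let us back out Magic’s approximate 2009 revenue – my inference, not a number stated in the report:

```python
revenue_2014 = 250e6   # Magic's reported annual revenue, ~2014
growth = 1.82          # 182% growth over 2009-2014

# If 2014 revenue = 2009 revenue * (1 + growth), then:
revenue_2009 = revenue_2014 / (1 + growth)
print(f"implied 2009 revenue: ~${revenue_2009 / 1e6:.0f}M")  # → ~$89M
```

In other words, Hearthstone took under two years to earn what Magic needed two decades to build up to.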
This is a bizarre situation. Wizards of the Coast (WotC), the company behind Magic, had twenty years to take the game online and turn it into a success. They failed miserably, and their meager attempts became a target for scorn and ridicule from players worldwide. While WotC did create an online platform to play Magic on, there were plenty of complaints: for starters, playing was extremely costly, since the virtual card packs generally cost the same as packs in the physical world. An evening of playing a draft – a small tournament with only eight players – would’ve cost each player around ten dollars, and would’ve required a time investment of up to four straight hours, much of it wasted waiting for the other players in the tournament to finish their matches with each other and move on to the next round.
These issues meant that Magic Online was mostly reserved for the top players, who had the money and the willingness to spend it on the game. WotC was aware of the disgruntlement about the state of things, but chose to do nothing – after all, it had no real contenders in the physical or the digital market. What did it have to fear? It had no real reason to change. In fact, the only smart decision WotC managers could take was NOT to take a risk and try to change the online experience, but to keep on making money – and lots of it – from a game that functioned well enough. And they could continue doing so until their business was rudely and abruptly disrupted.
The Business Theory of Disruption
The theory of disruption was originally conceived by Harvard Business School professor Clayton M. Christensen, and described in his best-selling book The Innovator’s Dilemma. Christensen has followed the evolution of several industries, particularly hard drives, but also including metalworking, retail stores and tractors. He found out that in each sector, the managers supported research and development, but all that R&D produced only two general kinds of innovations: sustaining innovations and disruptive ones.
The sustaining innovations were generally those that the customers asked for: increasing hard drive storage capacity, or making data retrieval faster. They led to obvious improvements, which brought immediate and clear benefit to the company in a competitive market.
The disruptive innovations, on the other hand, were those that completely changed the picture, and actually had a good potential to cost the company money in the short-term. Furthermore, the customers saw little value in them, and so the managers saw no advantage in pursuing these innovations. The company-employed engineers who came up with the ideas for disruptive innovations simply couldn’t find support for them in the company.
A good example of the process of disruption is the hard drive industry, a few years before the transition from 8-inch drives to 5.25-inch drives occurred. A quick look at the parameters of the two contenders, back in 1981, explains immediately why managers in the 8-inch drive manufacturing companies were wary of switching over to the 5.25-inch drive market. The 5.25-inch drives were simply inefficient, and lost the competition with 8-inch drives in almost every parameter except their size! And while size is obviously important, the computer market at the time consisted mainly of “minicomputers” – computers that cost ~$25,000 and were the size of a small refrigerator. At that size, the physical volume of the hard drives was simply irrelevant.
And so, 8-inch drive companies continued to focus on 8-inch drives, while a few renegade engineers opened new companies and worked hard on developing better 5.25-inch drives. In a few years, the 5.25-inch drives were just as efficient as the 8-inch drives, and a new market formed: that of the personal desktop computer. Suddenly, every computer maker in the market needed 5.25-inch drives.
Now, the 8-inch drive company managers were far from stupid or ignorant. When they saw that there was a market for 5.25-inch drives, they decided to leap on the opportunity as well, and develop their own 5.25-inch drives. Sadly, they were too late. They discovered that it takes time and effort to become acquainted with the demands of the new market, to adapt their manufacturing machinery, and to change the entire company’s workflow in order to produce and supply computer makers with 5.25-inch drives. They joined the competition far too late, and even though they had been the leviathans of the industry just ten years earlier, they soon sank to the bottom and were driven out of business.
What happened to the engineers who drove forward the 5.25-inch drives revolution, you may ask? They became CEOs of the new 5.25-inch drive manufacturing companies. A few years later, when their own young engineers came to them and suggested that they invest in developing the new and faulty 3.5-inch drives, they decided that there was no market for this invention right now, no demand for it, and that it’s too inefficient anyway.
Care to guess what happened next? Ten years later, the 3.5-inch drives took over, portable computers utilizing them were everywhere, and the 5.25-inch drive companies crumbled away.
That is the essence of disruption: decisions that make sense in the present are clearly incorrect in the long term, when markets change. Companies that relax and invest only in sustaining innovations, instead of trying to radically change their products and reshape the markets themselves, are doomed to fail. In Peter Diamandis’ words –
“If you aren’t disrupting yourself, someone else is.”
Now that you understand the basics of the Theory of Disruption, let’s see how it applies to Magic.
Magic and Disruption
Wizards of the Coast has been making almost exclusively sustaining improvements over the last twenty years: its talented R&D team focused almost exclusively on releasing new expansions with new cards and new playing mechanics. WotC also tried to disrupt themselves once by creating the Magic Online platform, but failed to support and nurture this disruptive innovation. The online platform remained mainly as an outdated relic – a relic that made money, to be sure, but was slowly becoming irrelevant in the online world of collectible card games.
In the last five years, many other collectible card games reared their heads online, including minor successes like Shadow Era (200,000 players, ~$156,000 annual revenue) and Urban Rivals (estimated ~$140,000 annual revenue). Each of the above made discoveries in the online world: they realized that players need to be offered cards for free, that they need to be lured to play every day, and that the free-to-play model can still prove profitable since the company’s costs are close to zero: the firm doesn’t need to physically print new cards or to distribute them to retailers. But these upstarts were still so small that WotC could afford to effectively ignore them. They didn’t pose a real threat to Magic.
Then Hearthstone burst into existence in 2014, and everything changed.
Hearthstone’s developers took the best traits of Magic and combined them with all the insights the online gaming industry has developed over recent years. They made the game essentially free to play to attract a large number of players, understanding that their revenues would come from the small fraction of players who spent some money on the game. They minimized time waste by setting a time limit on every player’s turn, and by establishing a rule that players can only act during their own turn (so there’s no need to wait for the other player’s response after every move). They even broke down the Magic draft tournaments of eight people, and made it so that every player who drafted a deck can now play against any other player who drafted a deck, at any time. There’s no time waste in Hearthstone – just games to play and fun to be had.
WotC was still deep asleep at that time. In July 2014, Magic brand manager Liz Lamb-Ferro told GamesBeat that –
“If you’re looking for just that immediate face-to-face, back-and-forth action-based game with not a lot of depth to it, then you can find that. … But if you want the extras … then you’re eventually going to find your way to Magic.”
Lamb-Ferro was right – Hearthstone IS a simpler game – but that simplicity streamlines gameplay, and thus makes the game more rapid and enjoyable to many players. And even if we were to accept that Hearthstone does not attract veteran players who “want the extras” (actually, it does), WotC should have realized that other online collectible card games would soon combine Magic’s sophistication with Hearthstone’s mechanisms for streamlining gameplay. And indeed, in 2014 a new game – SolForge – took all of the strengths of Hearthstone, while adding a mechanic of card transformation (each card transforming into three different versions of itself) that could only be possible in card games played online. SolForge doesn’t even have a physical version and never could have one, and the game is already costing Magic a few more veteran players.
This is the point when WotC began realizing that they were falling far behind the curve. And so, in the middle of 2015 they released Duels of the Planeswalkers 2016. I won’t even bother detailing all the infuriating problems with the game. Suffice it to say that it garnered more negative reviews than positive ones, and made clear that WotC were still lagging far behind their competitors in their understanding of the virtual world, user experience, and what players actually want. In short, WotC found themselves in the position of the 8-inch drive manufacturers, suddenly realizing that the market had changed under their noses in less than two years.
What Could WotC do?
The sad truth is that WotC can probably do nothing right now to fix Magic. The firm can continue churning out sustaining improvements – new expansions and new exciting cards – but it will find itself hard pressed to take over the digital landscape. Magic is a game that was designed for the physical world, and not for the current frenzied pace of the virtual collectible card games. Magic simply isn’t suitable for the new market, unless WotC changes the rules so much that it’s no longer the same game.
Could WotC change the rules in such a dramatic fashion? Yes, but at a great cost. The company could recreate the game online with new cards and rules, but it would have to invest time and effort in relearning the workings of the virtual world and creating a new platform for the revised Magic. Unfortunately, it’s not clear that WotC will have time to do that with Hearthstone, SolForge and a horde of other card games snarling at its heels. The future of Magic online does not look bright, to say the least.
Does that mean Magic the Gathering will vanish completely? Probably not. The Magic brand is still strong everywhere except for the virtual world, which means that in the next five years the game will remain in existence mostly in the physical world, where it will bring much joy to children in school breaks, and much money to the pockets of WotC. During these five years, WotC will have the opportunity to rethink and recreate the game for the next big market: virtual and augmented reality. If the firm succeeds in that front, it’s possible that Magic will be reinvented for the new-new market. If it fails and elects to keep the game anchored only in the physical world, then Magic will slowly but surely vanish away as the market changes and new and exciting games take over the attention span of the next generation.
That’s what happens when you disregard the Theory of Disruption.
A week ago I covered in this blog the possibility of using aerial drones for terrorist attacks. The following post dealt with the Failure of Myth and covered Causal Layered Analysis (CLA) – a futures studies methodology meant to counter the Failure of Myth and allow us to consider alternative futures radically different from the ones we tend to consider intuitively.
In this blog post I’ll combine insights from both recent posts together, and suggest ways to deal with the terrorism threat posed by aerial drones, in four different layers suggested by CLA: the Litany, the Systemic view, the Worldview, and the Myth layer.
To understand why we have to use such a wide-angle lens for the issue, I would compare the proliferation of aerial drones to another period in history: the transition between the Bronze Age and the Iron Age.
From Bronze to Iron
Sometime around 1300 BC, iron smelting was discovered by our ancient forefathers, probably in the Anatolia region. The discovery rapidly diffused to many other regions and civilizations, and changed the world forever.
If you ask people why iron weapons are better than bronze ones, they’re likely to answer that iron is simply stronger, lighter and more durable than bronze. However, the truth is that iron weapons are not much more efficient than bronze weapons. The real importance of iron smelting, according to “A Short History of War” by Richard A. Gabriel and Karen S. Metz, is this:
“Iron’s importance rested in the fact that unlike bronze, which required the use of relatively rare tin to manufacture, iron was commonly and widely available almost everywhere… No longer was it only the major powers that could afford enough weapons to equip a large military force. Now almost any state could do it. The result was a dramatic increase in the frequency of war.”
It is easy to imagine political and national leaders using only the first and second layers of CLA – the Litany and the Systemic view – at the transition from the Bronze Age to the Iron Age. “We should bring these new iron weapons to all our soldiers”, they probably told themselves, “and equip the soldiers with stronger shields that can deflect iron weapons”. Even as they enacted these changes in their armies, the worldview itself shifted, and warfare was vastly transformed because of the large number of civilians who could suddenly wield an iron weapon. Generals who thought that preparing for the change merely meant equipping their soldiers with iron weapons found themselves on the battlefield facing armies much larger than their own, because of the new conscription models their opponents had developed.
Such changes in warfare and in the existing worldview could have been realized in advance by utilizing the third and fourth layers of CLA – the Worldview and the Myth.
Aerial drones are similar to Iron Age weapons in that they are proliferating rapidly. They can be built or purchased at ridiculously low prices, by practically everyone. In the past, only the largest and most technologically-sophisticated governments could afford to employ aerial drones. Nowadays, every child has them. In other words, the world itself is upending everything we thought we knew about the possession and use of unmanned aerial vehicles. Such a dramatic change – which our descendants may yet come to call the Aerial Age when they look back in history – forces us to rethink everything we knew about the world. We must, in short, analyze the issue from a wide-angle view, with an emphasis on the third and fourth layers of CLA.
How, then, do we deal with the threat aerial drones pose to national security?
First Layer: the Litany
The intuitive way to deal with the threat posed by aerial drones is simply to reinforce the measures we’ve had in place before. Under the thinking constraints of the first layer, we should basically strive to strengthen police forces, and to provide larger budgets for anti-terrorist operations. In short, we should do just as we did in the past, but more and better.
It’s easy to see why public systems love the litany layer, since these measures build reputation and generate a general feeling that “we’re doing something to deal with the problem”. What’s more, they require extra budget (to be obtained from Congress) and make the organization larger along the way. What’s not to like?
Second Layer: the Systemic View
Under the systemic view we can think about the police forces, and the tools they have to deal with the new problem. It immediately becomes obvious that such tools are sorely lacking. Therefore, we need to improve the system and support the development of new techniques and methodologies to deal with the new threat. We might support the development of anti-drone weapons, for example, or open an entirely new police department dedicated to dealing with drones. Police officers will be trained to deal with aerial drones, so that nothing is left to chance. The judicial and regulatory systems lend themselves to the struggle at this layer, by issuing highly-regulated licenses to operate aerial drones.
Again, we could stop the discussion here and still have a highly popular set of solutions. As we delve deeper into the Worldview layer, however, the opposition starts building up.
Third Layer: the Worldview
When we consider the situation at the worldview layer, we see that the proliferation of aerial drones is simply a by-product of several technological trends: miniaturization and condensation of electronics, sophisticated artificial intelligence (at least by the standards of 20-30 years ago) for controlling the rotor blades, and even personalized manufacturing with 3D-printers, so that anyone can construct his or her own personal drone in the garage. All of the above lead to the Aerial Age – in which individuals can explore the sky as they like.
Looking at the world from this point of view, we immediately see that the vast expected proliferation of aerial drones in the coming decade will force us to reconsider our previous worldviews. Should we really focus on local or systemic solutions, rather than preparing ourselves for this new Aerial Age?
We can look even further than that, of course. In a very real way, aerial drones are but a symptom of a more general change in the world. The Aerial Age is but one aspect of the Age of Freedom, or the Age of the Individual. Consider that the power of designing and manufacturing is being taken from nations and granted to individuals via 3D-printers, powerful personal computers, and the internet. As a result of these inventions and others, individuals today hold power that once belonged only to the greatest nations on Earth. The established worldview, in which nations are the sole holders of power, is changing.
When one looks at the issue like this, it is clear that such a dramatic change can only be countered or mitigated by dramatic measures. Nations that want to retain their power and prevent terrorist attacks will be forced to break rules that were created long ago, back in the Age of Nations. It is entirely possible that governments and rulers will have to sacrifice their citizens’ privacy, and turn to monitoring their citizens constantly much as the NSA did – and is still doing to some degree. When an individual dissident has the potential to bring harm to thousands and even millions (via synthetic biology, for example), nations can ill afford to take any chances.
What are the myths that such endeavors will disrupt, and what new myths will they be built upon?
Fourth Layer: the Myth
I’ve already identified a few myths that will be disrupted by the new worldview. First and foremost, we will let go of the idea that only a select few can explore the sky. The new myth is that of Shared Sky.
The second myth to be disrupted is that nations hold all the technological power, while terrorists and dissidents are reduced to using crude bombs at best, or pitchforks at worst. This myth is no longer true, and it will be replaced by a myth of Proliferation of Technology.
The third myth to be dismissed is that governments can protect their citizens efficiently with the tools they have in the present. When we have such widespread threats in the Age of Freedom, governments will experience a crisis in governance – unless they turn to monitoring their citizens so closely that any pretense of privacy is lost. And so, it is entirely possible that in many countries we will see the emergence of a new myth: Safety in Exchange for Privacy.
Last week I analyzed the issue of aerial drones being used for terrorist attacks, by utilizing the Causal Layered Analysis methodology. When I look at the results, it’s easy to see why many decision makers are reluctant to solve problems at the third and fourth layers – Worldview and Myth. The solutions found in the lower layers – the Litany and the Systemic view – are so much easier to understand and to explain to the public. Regardless, if you want to actually understand the possibilities the future holds in any subject, you must ignore the first two layers in the long term, and focus instead on the big picture.
And with that said – happy new year to one and all!
At the 1900 World Exhibition in Paris, French artists made an attempt to forecast the shape of the world in 2000. They produced a few dozen vivid and imaginative drawings (clearly they did not succumb to the Failure of the Paradigm!)
Here are a few samples from the World Exhibition. Can you tell what all of those have in common with each other?
“If you leaf through a few of them, you quickly notice that each of these books says more about the times in which it was written than about the times it was meant to foretell.”
You only need to take another look at the images to convince yourselves of the truth of Gilbert’s statement. The women and men are dressed in the same way they were dressed in 1900, except for when they go ‘bird hunting’ – in which case the gentlemen wear practical swimming suits, whereas the ladies still stick with their cumbersome dresses underwater. Policemen still employ swords and brass helmets, and of course there are no policewomen. Last but not least, it seems that the future is entirely reserved for the Caucasian race, since nowhere in these drawings can you see a person of African or Asian descent.
The Failure of Myth
While some of the technologies depicted in these old paintings actually became reality (Skype is a nice example), it’s clear the artists completely failed to capture a larger change. You may call this a change in the zeitgeist – the spirit of the generation – or in the myths that surround our existence and lives. I’ll be calling this the Failure of Myth, and I hope you’ll agree that it’s impossible to consider the future without also taking into account these changes in our mythologies and underlying social and cultural assumptions: men can be equal to women, people of color have rights similar to those of white people, and LGBT people have the same right to exist as heterosexuals. None of these assumptions would’ve been obvious, or included in the myths and stories upon which society is based, a mere fifty years ago. Today they’re taken for granted.
Could we ever have forecast these changes?
Much as with the Failure of the Paradigm, I would posit that we can never accurately forecast the ways in which myths and culture are about to change. We can hazard some guesses, but that’s just what they are: guesswork that relies more on our myths in the present than on any solid understanding of the future.
That said, there are certain methodologies used by foresight researchers that could help us at least chart different solutions to problems in the present, in ways that force us to consider our current myths and worldviews – and challenge them when needed. These methodologies allow us to create alternative futures that could be vastly different from the present in the ways that really matter: how people think of themselves, of each other, and of the world around them.
In the rest of this blog post, I’ll sum up the practical principles of CLA, and show how they could be used to analyze different issues dealing with the future. Following that, in the next blog post, we’ll take a look again at the issue of aerial drones used for terrorist attacks, and use CLA to consider ways to deal with the threat.
CLA – Causal Layered Analysis
The core of CLA is the idea that every problem can be looked at in four successive layers, each deeper than the previous one. Let’s look at each layer in turn, and see how each adds depth to a discussion of a certain problem: the “high rate of medical mistakes leading to serious injury or death”, as Inayatullah describes in his book. My brief analysis of this problem at every layer is almost entirely based on his examples and thoughts.
First Layer: the Litany
The litany is the day-to-day talk. When you’re arguing at dinner parties about the present and the future, you’re almost certainly using the first layer. You’re basically repeating whatever you’ve heard from the media, from the politicians, from thought leaders and from your family. You may make use of data and statistics, but these are only interpreted according to the prevalent and common worldview that most people share.
When we rely on the first layer to consider the issue of medical mistakes, we look at the problem in a largely superficial manner. We can sum up the approach in one sentence: “Physicians make mistakes? Teach them better, and if they still don’t improve, throw them in jail!” In effect, we’re focusing on the people making the mistakes – the ones who are so easy to blame. The solutions in this layer are usually short-term solutions, and can be summed up in short sentences that appeal to audiences who share the same worldview.
Second Layer: the Systemic View
Using the systemic view of the second layer, we try to delve deeper into the issue. We don’t blame people anymore (although that does not mean we remove the responsibility for their mistakes from their shoulders), but instead we try to understand how the system itself contributes to the actions of the individual. To do that, we analyze the social, economic and political forces that mold the system into its current shape.
In the case of medical mistakes, the second layer encourages us to start asking tougher questions about the systems under which physicians operate. Could it be, for example, that physicians are rushing their treatments because they are only allowed to spend 5-10 minutes with each patient, as is the custom in many public medical services? Or perhaps the layout of the hospital does not allow physicians to consult easily with each other, and thus reach better solutions through teamwork?
The questions asked in the second layer mode of thinking allow us to improve the system itself and make it more efficient. We do not take the responsibility off the shoulders of the individuals, but we do accept that better systems allow and encourage individuals to reach their maximum efficiency.
Third Layer: Worldview
This is the layer where things get hairy for most people. In this layer we try to identify and question the prevalent worldview and how it contributes to the issue. These are our “cognitive lenses” through which we view and interpret the world.
As we try to analyze the issue of medical mistakes in the third layer, we begin to identify the worldviews behind medicine. We see that in modern medicine, the doctor stands “high above” in the hierarchy of knowledge – certainly much higher than patients. This hierarchy of knowledge and prestige defines the relationship between the physician and the patient. Once we understand this worldview, solutions that would’ve fit in the second layer – like the time physicians spend with patients – seem more like a small bandage on a gut wound than an effective way to deal with the issue.
Another worldview that can be identified and challenged in this layer is the idea that patients actually need to go to clinics or hospitals for check-ups. In an era of telepresence and electronics, why not make use of wearable computing or digital doctors to take care of many patients? As we examine this worldview and propose alternatives, we find that systemic solutions like “changing the shape of the hospitals” become unnecessary once more.
Fourth Layer: the Myth
The last layer, the myth, deals with the stories we tell ourselves and our children about the world and the way things work. Mythologies are defined by Wikipedia as –
“a collection of myths… [and] stories … [that] explain nature, history, and customs.”
Make no mistake: our children’s books are all myths that serve to teach children how they should behave in society. When my son reads about Curious George, he learns that unrestrained curiosity can lead you into danger, but also to unexpected rewards. When he reads about Hansel and Gretel, he learns of the dangers of trusting strangers and stepmothers. Even fantasy books teach us myths about the value of wisdom, physical prowess and even beauty, as the tall, handsome prince saves the day. Myths are perpetuated everywhere in culture, and are constantly strengthened in our minds through the media.
What can we say about medical mistakes in the Myth level? Inayatullah believes that the deepest problem, immortalized in myth throughout the last two millennia, is that “the doctor knows best”. Patients are taught from a very young age that the physician’s verdict is more important than their own thoughts and feelings, and that they should not argue against it.
While I see the point in Inayatullah’s view, I’m not as certain that it is the reason behind medical mistakes. Instead, I would add a partner myth: “the human doctor knows best”. This myth is spread to medical doctors in many institutes, and makes it more difficult for them to rely on computerized analysis, or even to consider that, as human beings, they’re biased by nature.
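For readers who like to tinker, the four-layer walkthrough above can be organized as a simple template for running your own CLA exercise. This is purely an illustrative sketch of my own – the layer names are CLA’s, and the example content summarizes the medical-mistakes discussion above:

```python
# Hypothetical template for organizing a Causal Layered Analysis.
# The four keys are CLA's layers, ordered from shallowest to deepest;
# the contents summarize Inayatullah's medical-mistakes example.

cla_medical_mistakes = {
    "litany": {
        "diagnosis": "Physicians make mistakes; blame and punish them.",
        "solutions": ["better training", "legal sanctions"],
    },
    "systemic": {
        "diagnosis": "The system pushes physicians toward mistakes.",
        "solutions": ["longer patient visits",
                      "hospital layouts that support teamwork"],
    },
    "worldview": {
        "diagnosis": "The doctor stands high above the patient "
                     "in the hierarchy of knowledge.",
        "solutions": ["telepresence check-ups", "wearable computing",
                      "digital doctors"],
    },
    "myth": {
        "diagnosis": "'The doctor knows best' / "
                     "'the human doctor knows best'.",
        "solutions": ["normalize computerized analysis",
                      "teach patients to question verdicts"],
    },
}

# A CLA discussion should touch every layer, shallowest to deepest:
for layer, analysis in cla_medical_mistakes.items():
    print(f"{layer}: {analysis['diagnosis']}")
```

Filling in such a template for your own issue forces the discussion past the litany, which is exactly the point of the methodology.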
Consolidating the Layers
As you may have realized by now, CLA is not used to forecast one accurate future, but is instead meant to deepen our thinking about potential futures. Any discussion about long-term issues should open with an analysis of those issues in each of the four layers, so that the solutions we propose – i.e. the alternative futures – can deal not only with the superficial aspects of the issue, but also with the deeper causes and roots.
The Failure of Myth – i.e. our difficulty in realizing that the future will change not only technologically, but also in the myths and worldviews we hold – is impossible to counter completely. We can’t know which myths will be promoted by future generations, just as we can’t forecast scientific breakthroughs fifty years in advance.
At most, we can be aware of the existence of the Failure of Myth in every discussion we hold about the future. We must assume, time after time, that the myths of future generations will be different from ours. My grandchildren may look at their meat-eating grandfather in horror, or laugh behind his back at his pants and shirt – while they walk naked in the streets. They may believe that complicated decisions should be left solely to computers, or that physical work should never be performed by human beings. These are just some of the possible myths that future generations can develop for themselves.
In the next blog post, I’ll go over the issue of aerial drones being used for terrorist attacks, and analyze it using CLA to identify a few possible myths and worldviews that we may need to change in order to deal with this threat.
The futurist Ian Pearson, in his fascinating blog The More Accurate Guide to the Future, has recently directed my attention to a new report by Bloomberg Business. Just two days ago, Bloomberg Business published a wonderful short report that identifies ten of the worst-case scenarios for 2016. In order to write the report, Bloomberg’s staff asked –
“…dozens of former and current diplomats, geopolitical strategists, security consultants, and economists to identify the possible worst-case scenarios, based on current global conflicts, that concern them most heading into 2016.”
I really love this approach, since currently many futurists – particularly the technology-oriented ones – are focusing mainly on all the good that will come to us soon enough. Ray Kurzweil and Tony Seba (in his book Clean Disruption) are forecasting a future with abundant energy; Peter Diamandis believes we are about to experience a new consumerism wave by “the rising billion” from the developing world; Aubrey De-Grey forecasts that we’ll uncover means to stop aging in the foreseeable future. And I tend to agree with them all, at least generally: humanity is rapidly becoming more technologically advanced and more efficient. If these upward trends continue, we will experience an abundance of resources and a life quality that far surpasses that of our ancestors.
But what if it all goes wrong?
When analyzing the trends of the present, we often tend to ignore the potential catastrophes, the disasters, and the irregularities and ‘breaking points’ that could occur. Or rather, we acknowledge that such irregularities could happen, but we often attempt to focus on the good instead of the bad. If there’s one thing that human beings love, after all, it’s feeling in control – and unexpected events show us the truth about reality: that much of it is out of our hands.
Bloomberg is taking the opposite approach with the current report (more of a short article, really): they have collected ten of the worst-case scenarios that could still conceivably happen, and have tried to understand how they could come about, and what their consequences would be.
The scenarios range widely in the areas they cover, from Putin sidelining America, to Israel attacking Iran’s nuclear facilities, and down to Trump winning the presidential elections in the United States. There’s even mention of climate change heating up, and the impact harsh winters and deadly summers would have on the world.
Strangely enough, the list includes only one scenario dealing with technologies: namely, banks being hit by a massive cyber-attack. In that aspect, I think Bloomberg is shining a light on a very large hole in geopolitical and social forecasting: the fact that technology-oriented futurists are almost never included in such discussions. Their ideas are usually far too bizarre and alienating for the silver-haired generals, retired diplomats and senior consultants who are involved in those discussions. And yet, technologies are a major driving force changing the world. How can we leave them out?
Technological Worst-Case Scenarios
Here are a few of my own worst-case scenarios for 2016, revolving around technological breakthroughs. I’ve tried to stick to the present as much as possible, so there are no scientific breakthroughs in this list (it’s impossible to forecast those), and no “cure for aging” or “abundant energy” in 2016. That said, quite a lot of horrible stuff could happen with technologies. Such as –
Proliferation of 3D-printed firearms: a single proficient designer could come up with a new design for 3D-printed firearms that reaches an efficiency level comparable to that of mass-manufactured weapons. The design would spread like wildfire through peer-to-peer services, and lead to a complete overhaul of firearm registration protocols in many countries.
First pathogen created by CRISPR technology: biology enthusiasts are now using CRISPR technology – a genetic engineering method so efficient and powerful that ten years ago it would’ve been considered the stuff of science fiction. It’s incredibly easy – at least compared to the past – to genetically manipulate bacteria and viruses using this technology. My worst-case scenario here is that one bright teenager with the right tools at hand will create a new pathogen, release it into the environment and, worse, brag about it online. Even if that pathogen proves to be relatively harmless, the mass scare that follows will stop research in genetic engineering laboratories around the world, and create panic about do-it-yourself enthusiasts.
A major, globe-spanning A.I. disaster: whether due to hacking or a simple programming mistake, an important A.I. will malfunction. Maybe it will be one – or several – of the algorithms currently trading in stock markets, largely autonomously, since they’re executing a new trade every 740 nanoseconds. No human being can follow their deals on the spot. A previous disaster on that front already led, in 2012, to one algorithm operated by Knight Capital purchasing stocks at inflated prices totaling $7 billion – in just 45 minutes. The stock market survived (even if Knight Capital’s stock did not), but what would happen if a few algorithms went out of order at the same time, or in response to one another? That could easily happen in 2016.
First implant virus: implants like cardiac pacemakers, or external devices like insulin pumps, can be hacked relatively easily. They do not pack much in the way of security, since they need to be as small and energy-efficient as possible. In many cases they also rely on wireless connections with the external environment. In my worst-case scenario for 2016, a terrorist manages to hack a pacemaker and create a virus that spreads from one pacemaker to another via the wireless communication between the devices. Finally, at a certain date – maybe September 11? – the virus disables all the pacemakers at the same time, or makes them send a burst of electricity through the patient’s heart, essentially inducing cardiac arrest.
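The scenario of trading algorithms going out of order in response to one another can be illustrated with a toy simulation. This is a deliberately naive sketch of my own, not a model of any real market: two momentum-chasing bots each buy whenever the last price move was upward, and every buy pushes the price further up, so a single tiny uptick snowballs even though no new information has entered the market.

```python
# Toy feedback-loop illustration (hypothetical, not a real market model):
# two naive momentum algorithms reacting only to each other's price impact.

def run_market(ticks, start_price=100.0, impact=0.5):
    """Simulate `ticks` rounds of two bots that buy after any upward move."""
    prices = [start_price, start_price + 0.01]  # one small initial uptick
    for _ in range(ticks):
        last_move = prices[-1] - prices[-2]
        # Both bots chase momentum: each buys if the last move was upward,
        # and each buy pushes the price up by `impact`.
        demand = 2 if last_move > 0 else 0
        prices.append(prices[-1] + impact * demand)
    return prices

prices = run_market(ticks=20)
# The bots' mutual reinforcement inflates the price tick after tick,
# with no outside news at all: a miniature flash-rally.
```

Real markets are vastly more complex, of course, but the core failure mode sketched here – algorithms treating each other’s behavior as information – is exactly the kind of interaction the scenario above worries about.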
This blog post is not meant to create panic or mass hysteria, but to highlight some of the worst-case scenarios in the technological arena. There are many other possible worst-case scenarios, and Ian Pearson details a few others in his blog post. My purpose in detailing these is simple: we can’t ignore such scenarios, or keep on living our lives with the assumption that “everything is gonna be alright”. We need to plan ahead and consider worst-case scenarios to be better prepared for the future.
Do you have ideas for your own technological worst-case scenarios for the year 2016? Write them down in the comments section!
In this post we’ll embark on a journey back in time, to the year 2000, when you were young and eager students. You’re sitting in a lecture given by a bald and handsome futurist. He’s promising you that within 15 years, i.e. in the year 2015, the exponential growth in computational capabilities will ensure that you will be able to hold a super-computer in your hands.
“Yeah, right,” a smart-looking student sniggers loudly, “and what will we do with it?”
The futurist explains that your future selves will watch movies and listen to music on that tiny computer. You exchange bewildered looks with your friends. You all find it difficult to believe – how can you store large movies on such a small computer? The futurist explains that another trend – the exponential growth in data storage – will mean that your hand-held super-computer will also store tens of thousands of megabytes.
You see some people in the audience rolling their eyes – promises, promises! Yet you are willing to keep on listening. Of course, the futurist then completely jumps off the cliff of rationality, and promises that in 15 years, everyone will enjoy wireless connectivity almost everywhere, at a speed of tens of megabytes per second.
“That makes no sense.” The smart student laughs again. “Who will ever need such a wireless network? Almost nobody has a laptop computer anyway!”
The futurist reminds you that everyone is going to carry super-computers on their bodies in the future. The heckler laughs again, loudly.
The Failure of Segregation
I assume you realize the point by now. The failure demonstrated in this exchange is what I call The Failure of Segregation. It is an incredibly common failure, stemming from our need to focus on only a single trend, and missing the combined and cumulative impacts of two, three or even ten trends at the same time.
In the example above, the forecast made by the futurist would not have been reasonable if only one trend were analyzed. Who needs superfast Wi-Fi if there are no advanced laptops and smartphones to use it? Almost nobody. So from a rational point of view, there’s no reason to invest in such a wireless network. It is only when you consider the three trends together – exponential growth in computational capabilities, in data storage and in wireless networks – that you can understand the future.
Every product we enjoy today is the result of several trends coming to fruition together. Facebook, for example, would not have been nearly as successful if not for these trends –
Exponential growth in computational capabilities, so that nearly everyone has a personal computer.
Miniaturization and mobilization of computers into smartphones.
Exponential improvement of digital cameras, so that every smartphone has a camera today.
Cable internet everywhere.
Wireless internet (Wi-Fi) everywhere.
Cellular internet connections provided by the cellular phone companies.
GPS receiver in every smartphone.
The social trend of people using online social networks.
These are only eight trends, but I’m sure there are many others standing behind Facebook’s success. Only by looking at all eight trends could we have hoped to forecast the future accurately.
Unfortunately, it’s not that easy to look into all the possible trends at the same time.
A Problem of Complexity
Let’s say that you are now aware of the Failure of Segregation, and so you try to contemplate all of the technological trends together, to obtain a more accurate image of the future. If you try to consider just three technological trends (A, B and C) and the ways they could work together to create new products, you would have four possible results: AB, AC, BC and ABC. That’s not so bad, is it?
However, if you add just one more technological trend to the mix, you’ll find yourself with eleven possible results. Do the calculations yourself if you don’t believe me. The formula is relatively simple, with N being the number of trends you’re considering, and X being the number of possible combinations of trends: X = 2^N − N − 1 (every possible subset of the N trends, minus the empty set and the N single-trend subsets).
It’s obvious that for just ten technological trends, there are about a thousand different ways to combine them together. Considering twenty trends will cause you a major headache, and will bring the number of possible combinations up to one million. Add just ten more trends, and you get a billion possible combinations.
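These figures are easy to verify with a few lines of Python – a minimal sketch, assuming the combination count for N trends is X = 2^N − N − 1, i.e. every subset of the trends except the empty set and the N single-trend subsets (which matches the figures quoted above):

```python
from itertools import combinations

def trend_combinations(n):
    """Number of ways to combine two or more of n trends: 2**n - n - 1."""
    return 2**n - n - 1

def brute_force(n):
    """Sanity check: enumerate every subset of size 2..n explicitly."""
    return sum(1 for k in range(2, n + 1) for _ in combinations(range(n), k))

assert trend_combinations(3) == brute_force(3) == 4    # AB, AC, BC, ABC
assert trend_combinations(4) == brute_force(4) == 11

print(trend_combinations(10))  # 1013 -- "about a thousand"
print(trend_combinations(20))  # 1048555 -- "one million"
print(trend_combinations(30))  # 1073741793 -- "a billion"
```

Running `trend_combinations(37)` for Gartner’s 37 trends gives over 137 billion combinations – which is why nobody should attempt that calculation in their head.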
To give you an understanding of the complexity of the task on hand, the international consulting firm Gartner has taken the effort to map 37 of the most highly expected technological trends in their Gartner’s 2015 Hype Cycle. I’ll let you do the calculations yourself for the number of combinations stemming from all of these trends.
The problem, of course, becomes even more complicated once you realize you can combine the same two, three or ten technologies to achieve different results. Smart robots (trend A) enjoying machine learning capabilities (trend B) could be used as autonomous cars, or they could be used to teach pupils in class. And of course, throughout this process we pretend to know that said trends will continue just the way we expect them to – and trends rarely do that.
What you should be realizing by now is that the opposite of the Failure of Segregation is the Failure of Over-Aggregation: trying to look at tens of trends at the same time, even though the human brain cannot hold such an immense variety of resultant combinations and solutions.
So what can we do?
Dancing between Failures
Sadly, there’s no golden rule or simple solution to these failures. The important thing is to be aware of their existence, so that discussions about the future are not oversimplified into considering just one trend, detached from the others.
Professional futurists use a variety of methods, including scenario development, general morphological analysis and causal layered analysis to analyze the different trends and attempt to recombine them into different solutions for the future. These methodologies all have their place, and I’ll explain them and their use in other posts in the future. However, for now it should be clear that the incredibly large number of possible solutions makes it impossible to consider only one future with any kind of certainty.
In some of the future posts in this series, I’ll delve deeper into the various methodologies designed to counter the two failures. It’s going to be interesting!
I often imagine myself meeting James Clerk Maxwell, one of the greatest physicists in the history of the Earth, and the one indirectly responsible for almost all the machinery we're using today – from radio to television sets and even power plants. He was recognized as a genius in his own time, and became a professor at the age of 25. His research resulted in Maxwell's Equations, which describe the connection between electric and magnetic fields. Every electronic device in existence today, and practically all the power stations transmitting electricity to billions of souls worldwide – they all owe their existence to Maxwell's genius.
And yet when I approach that towering intellectual of the 19th century in my imagination, and try to tell him about all that has transpired in the 20th century, I find that he does not believe me. That is quite unseemly of him, seeing as he is a figment of my imagination, but when I devote some more thought to the issue, I realize that he has no reason to accept any word that I say. Why should he?
At first I decide to go cautiously with the old boy, and tell him about X-rays – whose discovery was made in 1895, just 16 years after Maxwell's death. "Are you talking of light that can go through the human body and chart all the bones inside?" he asks me incredulously. "That's impossible!"
And indeed, there was no scientific school of thought in 1879 – the year of Maxwell's death – that could support the idea of X-rays.
I decide to jump ahead and skip the theory of relativity, and instead tell him about the atom bombs that demolished Hiroshima and Nagasaki. "Are you trying to tell me that just by banging together two pieces of that chemical which you call Uranium 235, I can release enough energy to level an entire town?" he scoffs. "How gullible do you think I am?"
And once again, I find that I cannot fault him for disbelieving my claims. According to all the scientific knowledge of the 19th century, energy cannot come from nowhere. Maxwell, for all his genius, does not believe me, and could not have forecast these advancements when he was alive. Indeed, no logical forecaster from the 19th century would have made these predictions about the future, since they all suffered from the Failure of the Paradigm.
A paradigm, according to Wikipedia, is “a distinct set of concepts or thought patterns”. In this definition one could include theories and even research methods. More to the point, a paradigm describes what can and cannot happen. It sets the boundaries of belief for us, and any forecast that falls outside of these boundaries requires the forecaster to come up with extremely strong evidence to justify it.
Up to our modern times and the advent of science, paradigms changed at a snail's pace. People in medieval times largely figured that their children would live and die the same way as they themselves did, as would their grandchildren and great-grandchildren, up to the day of the Rapture. But then Science came, with thousands of scientists researching the movement of the planets, the workings of the human body – and the connections between the two. And as they uncovered the mysteries of the universe and the laws that govern our bodies, our planets and our minds, paradigms began to change, and the impossible became possible and plausible.
The discovery of the X-rays is just one example of an unexpected shift in paradigms. Other such shifts include –
Using nuclear energy in reactors and in bombs
Lord Rutherford – the "father of nuclear physics" – denigrated, in the early 20th century, the idea that the energy locked in matter would ever be utilized by mankind; and yet one year after his death, the fission of the uranium nucleus was discovered.
Electric motors and power generation
According to the legend, the great experimental physicist Michael Faraday was paid a visit by governmental representatives back in the 19th century. Faraday showed the delegation his clunky and primitive electric motors – the first of their kind. The representatives were far from impressed, and one of them asked "what could possibly be the use for such toys?" Faraday's answer (which is probably more urban myth than fact) was simple – "what use is a newborn baby?"
Today, our entire economy and way of life are based on electronics and on the power obtained from electric power plants – all of them based on Faraday's innovations, and completely unexpected in his time.
Induced Pluripotent Stem Cells
This paradigm shift happened just nine years ago. It was believed that biological cells, once they mature, can never 'go back' and become young again. Shinya Yamanaka and other researchers turned that belief on its head in 2006, by genetically reprogramming mature cells back into a youthful state, turning them into stem cells. That discovery earned Yamanaka the 2012 Nobel Prize.
How Paradigms Advance
It is most illuminating to see how computers advanced throughout the 20th century, constantly shifting from one paradigm to another along the years. From 1900 to the 1930s, computers were electromechanical in nature: slow and cumbersome constructs built from electric switches. As technology progressed and new scientific discoveries were made, computers moved on to electric relay technology, and then to vacuum tubes.
One of the first and best known computers based on vacuum tube technology is the ENIAC (Electronic Numerical Integrator and Computer), which weighed 30 tons and used 200 kilowatts of electricity. It could perform 5,000 calculations a second – a task which every smartphone today exceeds without breaking a sweat… since smartphones are based on the newer paradigms of transistors and integrated circuits.
At each point in time, if you were to ask most computer scientists whether computers could progress much beyond their current state of the art, the answer would've been negative. If the scientists and engineers working on the ENIAC were told about a smartphone, they would've been completely baffled. "How can you put so many vacuum tubes into one device?" they would've asked. "And where's the energy to operate them all going to come from? This 'smartphone' idea is utter nonsense!"
And indeed, one cannot build a smartphone with vacuum tubes. The entire computing paradigm needed to change in order for this new technology to appear on the world’s stage.
What does the Failure of the Paradigm mean? Essentially, it means that we cannot reliably forecast a future distant enough for a paradigm shift to occur. Once the paradigm changes, all previous limitations and boundaries are dissolved, and what happens next is up for grabs.
This insight may sound gloomy, since it makes clear that reliable forecasts are impossible to make a decade or two into the future. And yet, now that we understand our limitations we can consider ways to circumvent them. The solutions I’ll propose for the Failure of the Paradigm are not as comforting as the mythical idea that we can know the future, but if you want to be better prepared for the next paradigm, you should consider employing them.
Solutions for the Failure of the Paradigm
First Solution: Invent the New Paradigm Yourself
The first solution is quite simple: invent the new paradigm yourself, and thus be the one standing on top when the new paradigm takes hold. The only problem is, nobody is quite certain what the next paradigm is going to be. This is the reason why we see the industry giants of today – Google, Facebook, and others – buying companies left and right. They're purchasing drone companies, robotics companies, A.I. companies, and any other idea that looks as if it has a chance to grow into a new and successful paradigm a decade from now. They're spreading and diversifying their investments, since if even one of these investments leads to the new paradigm, they will be the Big Winners.
Of course, this solution can only work for you if you’re an industry giant, with enough money to spare on many futile directions. If you’re a smaller company, you might consider the second solution instead.
Second Solution: Utilize New Paradigms Quickly
The famous entrepreneur Peter Diamandis often encourages executives to invite small teams of millennials into their factories and companies, and to ask them to actively come up with ideas to disrupt the current workings of the company. The millennials – people between 20 and 30 years old – are less bound by ancient paradigms than the people currently working in most companies. Instead, they are living the new paradigms of social media, internet everywhere, constant surveillance and loss of privacy, etc. They can utilize and deploy the new paradigms rapidly, in a way that makes the old paradigms seem antique and useless.
This solution, then, helps executives circumvent the Failure of the Paradigm by adapting to new paradigms as quickly as possible.
Third Solution: Forecast Often, and Read Widely
One of the rules for effective forecasting, as noted futurist Paul Saffo wrote in Harvard Business Review in 2007, is to forecast often. The proficient forecaster needs to be constantly on the alert for new discoveries and breakthroughs in science and technology – and be prepared to suggest new forecasts accordingly.
The reason behind this rule is that new paradigms rarely (if ever) appear out of the blue. There are always telltale signs, which are called Weak Signals in foresight jargon. Such weak signals can be uncovered by searching for new patents, reading Scientific American, Science and Nature to find out about new discoveries, and generally browsing through the New York Times every morning. By doing so, one can develop a better hunch about the coming of a new paradigm.
Fourth Solution: Read Science Fiction
You knew that one was coming, didn't you? And for a good reason, too. Many science fiction novels are based on some kind of paradigm shift occurring that forces the world to adapt to it. Sometimes it's the creation of the World Wide Web (which William Gibson speculated about in his science fiction works), or rockets being sent to the moon (as was the case in Jules Verne's book "From the Earth to the Moon"), or even dealing with cloning, genetic engineering and bringing back extinct species, as in Michael Crichton's Jurassic Park.
Science fiction writers consider the possible paradigm shifts and analyze their consequences and implications for the world. Gibson and other science fiction writers understood that if the World Wide Web were created, we would have to deal with cyber-hackers, with cloud computing, and with the mass democratization of information. In short, they forecast the implications of the new paradigm.
Science fiction does not provide us with a solid forecast for the future, then, but it helps us open our minds and escape the Failure of the Paradigm by considering many potential new paradigms at the same time. While there is no research to support this claim, I truly believe that avid science fiction readers are better prepared for new paradigms than everyone else, as they’ve already lived those new paradigms in their minds.
Fifth Solution: Become a Believer
When trying to look far into the future, don't focus on the obstacles of the present paradigm. Rather, if you observe that similar obstacles have been overcome repeatedly in the past (as happened with computers), there is good reason to assume that the current obstacles will be defeated as well, and a new paradigm will shine through. Therefore, you have to believe that mankind will keep on finding solutions and developing new paradigms. The forecaster is forced, in short, to become a believer.
Obviously, this is one of the toughest solutions for us to implement as rational human beings. It also requires us to look carefully at each technological field in order to understand the nature of the obstacles, and how long it will take (judging by past trends) to come up with a new paradigm to overcome them. Once the forecaster identifies these parameters, he can be more secure in his belief that new paradigms will be discovered and established.
Sixth Solution: Beware of Experts
This is more of an admonishment than an actual solution, but it is true all the same. Beware of experts! Experts are people whose knowledge was developed during the previous paradigm, or at best during the current one. They often have a hard time translating their knowledge into useful insights about the next paradigm. While they can highlight all the difficulties existing in the current paradigm, it is up to you to consider how in touch those experts are with the next potential paradigms, and whether or not to listen to their advice. That's what Arthur C. Clarke's first law is all about –
“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”
The Failure of the Paradigm is a daunting one, since it means we can never forecast the future as reliably as we would like to. Nonetheless, business people today can employ the above solutions to be better prepared for the next paradigm, whatever it turns out to be.
Of all the proposed solutions to the Failure of the Paradigm, I like the fourth one the best: read science fiction. It’s a cheap solution that also brings much enjoyment to one’s life. In fact, when I consult for industrial firms, I often hire science fiction writers to write stories about the possible future of the company in light of a few potential paradigms. The resulting stories are read avidly by many of the employees in the company, and in many cases show the executives just how unprepared they are for these new paradigms.