I am a Futures Studies researcher with special expertise in foresight, wild cards (black swans), and the analysis of emerging and disruptive technologies. I have spoken to hundreds of conference audiences and appeared frequently on TV and radio to discuss the fascinating study of the future of technology and society with a wide range of listeners and viewers. Most recently, I have been lecturing for and consulting to organizations worldwide, including large companies and firms, the Lahav Management School at Tel Aviv University and other educational institutions, and I have served as an invited keynote lecturer at innovation and global-thinking workshops internationally, including in Greece, Kazakhstan, and Belarus.
I’ve finally had the chance to watch Star Wars: The Force Awakens, and I’m not going to sugarcoat it: it was incredibly mediocre. The director mainly leaned on nostalgia value to replace the need for humor, real drama or character development. I’m not saying you shouldn’t watch it – just don’t set your expectations too high.
The really interesting thing in the movie, for me, was the Failure of the Paradigm woven throughout it. As has often been noted, Star Wars is in fact a medieval tale of knights in shining armor, a princess in distress (an actual princess! in space!), an evil dark wizard and some unresolved father-son issues. So yes, we have a civilization that is technologically advanced enough to travel between planets at warp speed without much need for fuel, yet we see no comparable developments in any other field: no nano-robots, no human augmentation, no biological warfare, no computer-brain interfaces, and absolutely no artificial intelligence. And please don’t insult my intelligence by claiming that R2D2 has one.
Star Wars: a medieval space tale of knights and damsels in distress. Image originally from GeekTyrant
The question we should be asking is why. Why would any script writer ignore so many of these potential technological developments – some of which are bound to appear in the next few decades – and focus instead on plots around which countless other stories have been told and retold for thousands of years?
The answer is the Failure of the Paradigm: we are stuck in the current paradigm of humanity, love, heroes and free will expressed by biological entities. It takes a superb director and script writer – the Wachowskis’ The Matrix comes to mind – to create an excellent movie that makes you rethink those paradigms. But if you stick with the current paradigms, all you need is an average script, an average director and a lot of explosions to create a blockbuster.
Star Wars is a great example of how NOT to make a science fiction movie. It does not explore the boundaries of what’s possible and impossible in any significant way. It does not make us consider the impact of new technologies, or the changing structure of humanity. It sticks to the old lines and old terms: evil vs. good, empire vs. rebels, father vs. son, and a dashing hero with a bumbling damsel in distress (even though the damsel in the new movie is male). It is not science fiction. Instead, it is a fantasy movie.
And that’s great for some people. Heck, maybe even most people. That’s why it’s the ruling paradigm at the moment – it makes people feel happy and content. But I can’t help thinking about the opportunity lost here. A movie with such a huge audience could make people think. The director could have involved a sophisticated AI in the plot, to make people consider the future of working with artificial virtual assistants. Instead we got a clownish robot. And destroying planets with cannons that require an immense energy output? What evil empire in its right mind would use such an inefficient method? Why not, instead, just reprogram a single bacterium to create ‘grey goo’ – a self-replicating nano-robot that devours all humans in its path in order to make more replicas of itself?
The answer is obvious: developments like these would make this fictional world too different from anything we’re willing to accept. In a world of sophisticated risk-calculating AI, there’s not much place for heroics. In a world of nano-technology, there’s no place for wasteful explosions. And in a world with brain-machine interfaces, it is entirely possible that there’s no place for love, biological or otherwise. All of these paradigms that are inherent to us would be gone, and that’s a risk most directors and script writers just aren’t willing to take.
So go – watch the new Star Wars movie, for old times’ sake. But after you do, don’t skip the other science fiction movies from the last couple of years that force us to rethink our paradigms. I particularly recommend Chappie and Ex Machina from the past year. These movies may not have the same number of eager followers, and in some cases they are quite disturbing (Chappie received a rating of only 31% on Rotten Tomatoes) – but they will make you think between the explosions. And in the end, isn’t that what we should expect from our science fiction?
The future of genetic engineering is, at the moment, a mystery to everyone. The idea of reprogramming life is undeniably cool, but today it is mostly confined to the most sophisticated labs. How will genetic engineering change in the future, though? Who will use it? And how?
In an attempt to provide a starting point for a discussion, I’ve analyzed the issue according to Daniel Burrus’ “Eight Pathways of Technological Advancement”, found in his book Flash Foresight. While the book provides more insights about creativity and business skills than about foresight, it does contain some interesting gems like the Eight Pathways. I’ve led workshops in the past where I taught chief executives how to use this methodology to gain insights about the future of their products, and it has been a great success. So in this post we’ll try applying it to genetic engineering – and see what comes out.
Eight Pathways of Technological Advancement
Make no mistake: technology does not “want” to advance or improve. There is no law of nature dictating that technology will advance, or in what direction. Human beings improve technology, generation after generation, to better solve their problems and make their lives easier. Since we roughly understand humans and their needs and wants, we can often identify how technologies will improve in order to answer those needs. The Eight Pathways of Technological Advancement, therefore, describe the general ways in which technology is adapted to our needs.
Let’s go briefly over the pathways, one by one. If you want a better understanding and more elaborate explanations, I suggest you read the full Flash Foresight book.
First Pathway: Dematerialization
By dematerialization we mean literally to remove atoms from the product, leading directly to its miniaturization. Cellular phones, for example, have become much smaller over the years, as did computers, data storage devices and generally any tool that humans wanted to make more efficient.
Of course, not every product undergoes dematerialization. Even if we were to miniaturize car engines, cars would still stay large enough to hold at least one passenger comfortably. So we need to take into account that the device should still be able to fulfill its original purpose.
Second Pathway: Virtualization
Virtualization means that we take certain processes and products that currently exist or are being conducted in the physical world, and transfer them fully or partially into the virtual world. In the virtual world, processes are generally streamlined, and products have almost no cost. For example, modern car companies take as little as 12 months to release a new car model to market. How can engineers complete the design, modeling and safety testing of such complicated models in less than a year? They’re simply using virtualized simulation and modeling tools to design the cars, up to the point when they’re crashing virtual cars with virtual crash dummies in them into virtual walls to gain insights about their (physical) safety.
Thanks to virtualization, crash dummies everywhere can relax. Image originally from @TheCrashDummies.
Third Pathway: Mobility
Human beings invent technology to help them fulfill certain needs and take care of their woes. Once that technology is invented, it’s obvious that they would like to enjoy it everywhere they go, at any time. That is why technologies become more mobile as the years go by: in the past, people could only speak on the phone from the post office; today, wireless phones can be used anywhere, anytime. Similarly, cloud computing enables us to work on every computer as though it were our own, by utilizing cloud applications like Gmail, Dropbox, and others.
Fourth Pathway: Product Intelligence
This pathway does not need much of an explanation: we experience its results every day. Whenever our GPS navigation system speaks up in our car, we are reminded of the artificial intelligence engines that help us in our lives. As Kevin Kelly wrote in his WIRED piece in 2014 – “There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ.”
Fifth Pathway: Networking
The power of networking – connecting people and items to each other – becomes clear in our modern age: Napster was a result of networking; torrents are a result of networking; even bitcoin and blockchain technology are manifestations of networking. Since products and services gain so much from connecting their users, many of them take this pathway into the future.
Sixth Pathway: Interactivity
As products gain intelligence of their own, they also become more interactive. Google completes our search phrases for us; Amazon suggests the products we should desire according to our past purchases. These service providers interact with us automatically, to provide a better service to the individual, instead of catering to some average of the masses.
Seventh Pathway: Globalization
Networking means that we can make connections all over the world, and as a result, products and services become global. Crowdfunding platforms like Kickstarter, which suddenly enable local businesses to gain support from the global community, are a great example of globalization. Small firms can find themselves capable of catering to a global market thanks to improvements in mail delivery systems – like a company that delivers socks monthly – and that is another example of globalization.
Eighth Pathway: Convergence
Industries are converging, and so are services and products. The iPhone is a convergence of a cellular phone, a computer, a touch screen, a GPS receiver, a camera, and several other products that have come together to create a unique device. Similarly, modern aerial drones can be considered a result of the convergence pathway: a camera, a GPS receiver, an inertial measurement unit, and a few propellers to carry the entire unit in the air. Each of the above is useful on its own, but together they create a product that is much more than the sum of its parts.
How could genetic engineering progress along the Eight Pathways of technological improvement?
Pathways for Genetic Engineering
First, it’s safe to assume that genetic engineering as a practice will require less space and fewer tools to conduct (the Dematerialization of genetic engineering). That is hardly surprising, since biotechnology companies are constantly releasing new kits and appliances that streamline, simplify and add efficiency to lab work. This trend also answers the need for Mobility (the third pathway), since it means complicated procedures can be performed outside the top universities and labs.
As part of streamlining the work process of genetic engineers, some elements would be virtualized. As a matter of fact, the Virtualization of genetic engineering has been taking place over the past two decades, with scientists ordering DNA and RNA codes from the internet, and browsing over virtual genomic databases like NCBI and UCSC. The next step of virtualization seems to be occurring right now, with companies like Genome Compiler creating ‘browsers’ for the genome, with bright colors and easily understandable explanations that reduce the level of skill needed to plan an experiment involving genetic engineering.
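To give a sense of just how virtualized this work has already become, here is a minimal sketch of fetching a DNA sequence from NCBI programmatically. It assumes the Biopython package is installed, and the accession number is only an illustrative example:

```python
# A minimal sketch of querying NCBI's virtual databases from code.
# Assumes the Biopython package; the accession number is an illustrative example.
from Bio import Entrez, SeqIO

Entrez.email = "your.name@example.com"  # NCBI asks for a contact address

# Download a nucleotide record as FASTA text (example accession: human TP53 mRNA)
handle = Entrez.efetch(db="nucleotide", id="NM_000546",
                       rettype="fasta", retmode="text")
record = SeqIO.read(handle, "fasta")
handle.close()

print(record.id, len(record.seq), "bases")
print(record.seq[:60], "...")
```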
How can we apply the pathway of Product Intelligence to genetic engineering? Quite easily: virtual platforms for designing genetic engineering experiments will include AI engines that aid the experimenters with their tasks. The AI assistant will understand what the experimenter wants to do, suggest ways, methodologies and DNA sequences that will help accomplish it, and possibly even – in a decade or two – conduct the experiment automatically. Obviously, that also satisfies the criterion of Interactivity.
If this future sounds far-fetched, consider that there are already lab robots conducting highly convoluted experiments, like the robot scientists Adam and Eve. As the field of robotics makes strides forward, it is quite possible that we will see similar rudimentary robots working in makeshift do-it-yourself biology labs.
Networking and Globalization are essentially the same for the purposes of this discussion, and complement Virtualization nicely. Communities of biology enthusiasts are already forming all over the world, sharing their ideas and virtual schematics with each other. The annual iGEM (International Genetically Engineered Machine) competition is good evidence of that: undergraduate students worldwide take part in it, designing parts of useful genetic code and sharing them freely with each other. That’s Networking and Globalization for sure.
Last but not least, we have Convergence – the convergence of processes, products and services into a single overarching system of genetic engineering.
Well, then, what would a convergence of all the above pathways look like?
The Convergence of Genetic Engineering
Taking all of the pathways and converging them leads us to a future in which genetic engineering can be performed by nearly anyone, at any place. The process of designing genetic engineering projects will be largely virtualized, and will be aided by artificial assistants and advisors. The actual genetic engineering will be conducted in sophisticated labs – as well as in makers’ houses and in DIY enthusiasts’ kitchens. Ideas for new projects, and designs of successful past projects, will be shared on the internet. Parts of this vision – like the virtualization of experiments – are happening right now. Other parts, like AI involvement, are still in the works.
What does this future mean for us? Well, it all depends on whether you’re optimistic or pessimistic. If you’re prone to pessimism, this future may look like a disaster waiting to happen. When teenagers and terrorists are capable of designing and creating deadly bacteria and viruses, the future of mankind is far from safe. If you’re an optimist, you could consider that as the power to re-engineer life comes down to the masses, innovations will rise everywhere. We will see glowing trees replacing lightbulbs in the streets, genetically engineered crops with better traits than ever before, and therapeutics (and drugs) being synthesized in human intestines. The truth, as usual, is somewhere in between – and we still have to discover it.
Conclusion
If you’ve been reading this blog for some time, you may have noticed a recurring pattern: I inquire into a certain subject, and then analyze it according to a certain foresight methodology. So far, such posts have covered the Business Theory of Disruption (used to analyze the future of collectible card games), Causal Layered Analysis (used to analyze the future of aerial drones and of medical mistakes) and Pace Layer Thinking. I hope to keep giving you orderly and proven methodologies that help in thinking about the future.
How you actually use these methodologies in your business, class or salon talk – well, that’s up to you.
When most of us think of the Marine Corps, we usually imagine sturdy soldiers charging headlong into battle, or carefully sniping at an enemy combatant from the tops of buildings. We probably don’t imagine them reading – or writing – science fiction. And yet, that’s exactly what 15 Marines are about to do two weeks from now.
The Marine Corps Warfighting Lab (I bet you didn’t know they have one) and The Atlantic Council are holding a Science Fiction Futures Workshop in early February. And guess what? They’re looking for “young, creative minds”. You probably have to be a Marine, but even if you aren’t – maybe you’ll have a chance if you submit an application as well.
Two days ago, the picture above was posted on Facebook by Tom Martindale –
Two things are immediately obvious:
The ‘planet’ to the right is actually the moon with the United States stretched all over it;
About two thousand people thought it was important enough to share this obvious hoax with their friends.
So – are there indeed two thousand people ignorant enough to share this message without realizing just how ridiculous it is? Isn’t that a reason to be worried about the state of the nation, about people’s education, and also to bemoan the tendency of social media to spread rumors far and wide without any criticism?
Not necessarily.
About two days ago, when the image was still fresh on Facebook and had gathered only 500 shares, I took the liberty of going through all the “shares” of the picture that Facebook saw fit to show me. Altogether, I browsed through 86 “shares” – barely a fifth of the full number of people who shared the picture, but still a significant sample. I divided the shares into three categories:
Identified the hoax: shares by people who recognized the hoax, or whose friends pointed it out to them in the replies.
Fooled by the hoax: shares by people who explicitly mentioned that we were destroying the Earth, which I’m assuming means they thought the picture was authentic.
Unknown: shares by people who didn’t write anything about the picture, and whose friends did not reply either. We can’t know whether they shared the picture because they believed it was authentic, or because they wanted to have a good laugh about the hoax with their friends.
Care to guess how many people fell for the hoax?
The results are pretty clear. Out of the 86 shares, only one treated the picture explicitly as if it symbolized the destruction of the Earth. Of the other 85 shares, 40 dismissed the picture outright or had it dismissed for them by their friends, while the rest are unknown – they didn’t write anything about the picture in their share.
That’s actually very impressive. If we assume that the “shares” I counted reflect the overall distribution of shares, it means that for every person who fell victim to the hoax, we have forty people who identified it outright as a hoax, or had it explained to them immediately by their friends.
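For transparency, here is the tally behind that ratio as a short sketch, using only the numbers counted above:

```python
# Tally of the 86 shares I managed to browse through, by category
shares = {
    "identified_hoax": 40,  # dismissed the picture, or had friends dismiss it for them
    "fooled_by_hoax": 1,    # explicitly treated the picture as real
    "unknown": 45,          # shared without comment, with no replies to judge by
}

total = sum(shares.values())
ratio = shares["identified_hoax"] / shares["fooled_by_hoax"]

print(f"total shares sampled: {total}")                  # 86
print(f"identified-to-fooled ratio: {ratio:.0f} to 1")   # 40 to 1
print(f"fraction fooled: {shares['fooled_by_hoax'] / total:.1%}")  # ~1.2%
```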
What can we learn from this (admittedly small) piece of data?
First, just because a certain image gets shared around the social networks, it doesn’t automatically mean that the sharers actually believe it is true or even worth reading. Many may be sharing it simply to ridicule others. I know this isn’t really a newsflash for all of you reading this post, but with everyone being so gloomy about the state of the nation’s ignorance and gullibility, it’s a good thing to keep in mind.
Second, while social networks are often rightly accused of spreading rumors, lies and misperceptions, it’s impossible to ignore their positive effects. Ignorant people can be found in every crowd, but they often don’t even know how ignorant they actually are. On social networks, it can be difficult to remain ignorant unless you’re doing so by choice. Whatever you share is open to debate, to criticism, to ridicule and to corrections by people who often know more and care more about the subject than you do.
Obviously, that’s not the end of the issue by far. Social networks can also be used to spread untruths of many kinds. On many issues, the loudest and most rabid voices are the ones most heard. If an alien from outer space logged into Facebook today, it would conclude that GMOs are hazardous to your health, vaccines cause autism, and marijuana cures cancer. At least two of the above are clearly and demonstrably false, and yet each conspiracy theory has gathered a large crowd of believers who will defend it online to their dying breath against any rational argument.
So: social networks – are they good or bad for public knowledge and understanding? That’s obviously a false dichotomy. Social networks work just like the agora – the gathering place where all the Greek citizens came together to discuss matters. They bring the agora to us, which means we’re going to get approached by many charlatans peddling their wares and beliefs, and also by the skeptics who are trying to warn us off. Social networks take away the loneliness of the individual, and turn us into a crowd – for good AND for bad at the same time.
Twenty years ago, when I was young and beautiful, I picked up a wrapped pack of cards in a computer games store, and read for the first time the tag “Magic: the Gathering”. That was the beginning of my long-time romance with the collectible card game. I imported the game to Israel, translated the rules leaflet to Hebrew for my friends, and went on to play semi-professionally for twenty years, up to the point when I became the Israeli champion. The game has pretty much shaped my years as a teenager, and has helped me make friends and meet interesting people from all over the world.
That is why it’s so sad to me to see the state of the game right now, and realize that it is almost certainly doomed to fail in the long run.
Magic: The Gathering. The game that has bankrupted thousands of parents.
The Rise and Decline of Magic the Gathering
Make no mistake: Magic: The Gathering (just Magic for short) is still the top dog among collectible card games in the physical world. According to a report released in 2014, the annual revenue from Magic grew by 182% between 2009 and 2014, reaching a total of around $250 million a year. That’s a lot of money, to be sure.
The only problem is that Hearthstone, a digital card game released at the beginning of 2014, has reached annual revenues of around $240 million in less than two years. I will not be surprised to see those numbers grow even larger in the future.
This is a bizarre situation. Wizards of the Coast (WotC), the company behind Magic, had twenty years to take the game online and turn it into a success. They failed miserably, and their meager attempts at doing so became a target for scorn and ridicule from players worldwide. While WotC did create an online platform to play Magic on, there were plenty of complaints: for starters, playing was extremely costly, since the virtual card packs generally cost the same as packs in the physical world. An evening of playing a draft – a small tournament with only eight players – would’ve cost each player around ten dollars, and would’ve required a time investment of up to four straight hours, much of it wasted waiting for the other players in the tournament to finish their matches with each other and move on to the next round.
These issues meant that Magic Online was mostly reserved for the top players, who had the money and the willingness to spend it on the game. WotC was aware of the disgruntlement about the state of things, but chose to do nothing – after all, it had no real contenders in the physical or the digital market. What did it have to fear? It had no real reason to change. In fact, the only smart decision WotC managers could take was NOT to take a risk and try to change the online experience, but to keep on making money – and lots of it – from a game that functioned well enough. And they could continue doing so until their business was rudely and abruptly disrupted.
The Business Theory of Disruption
The theory of disruption was originally conceived by Harvard Business School professor Clayton M. Christensen, and described in his best-selling book The Innovator’s Dilemma. Christensen followed the evolution of several industries, particularly hard drives, but also metalworking, retail stores and tractors. He found that in each sector, the managers supported research and development, but all that R&D produced only two general kinds of innovations: sustaining innovations and disruptive ones.
The sustaining innovations were generally those that the customers asked for: increasing hard drive storage capacity, or making data retrieval faster. They led to obvious improvements, which brought immediate and clear benefit to the company in a competitive market.
The disruptive innovations, on the other hand, were those that completely changed the picture, and actually had a good potential to cost the company money in the short-term. Furthermore, the customers saw little value in them, and so the managers saw no advantage in pursuing these innovations. The company-employed engineers who came up with the ideas for disruptive innovations simply couldn’t find support for them in the company.
A good example of the process of disruption is the hard drive industry, a few years before the transition from 8-inch drives to 5.25-inch drives occurred. A quick comparison of the two contenders back in 1981 explains immediately why managers in the 8-inch drive manufacturing companies were wary of switching over to the 5.25-inch drive market. The 5.25-inch drives were simply inefficient, and lost the competition with 8-inch drives on almost every parameter except their size! And while size is obviously important, the computer market at the time consisted mainly of “minicomputers” – computers that cost ~$25,000 and were the size of a small refrigerator. At that size, the physical volume of the hard drives was simply irrelevant.
And so, 8-inch drive companies continued to focus on 8-inch drives, while a few renegade engineers opened new companies and worked hard on developing better 5.25-inch drives. In a few years, the 5.25-inch drives were just as efficient as the 8-inch drives, and a new market formed: that of the personal desktop computer. Suddenly, every computer maker in the market needed 5.25-inch drives.
Now, the 8-inch drive company managers were far from stupid or ignorant. When they saw that there was a market for 5.25-inch drives, they decided to leap on the opportunity as well, and develop their own 5.25-inch drives. Sadly, they were too late. They discovered that it takes time and effort to become acquainted with the demands of the new market, to adapt their manufacturing machinery and to change the entire company’s workflow in order to produce and supply computer makers with 5.25-inch drives. They joined the competition far too late, and even though they had been the leviathans of the industry just ten years earlier, they soon sank to the bottom and were driven out of business.
What happened to the engineers who drove forward the 5.25-inch drives revolution, you may ask? They became CEOs of the new 5.25-inch drive manufacturing companies. A few years later, when their own young engineers came to them and suggested that they invest in developing the new and faulty 3.5-inch drives, they decided that there was no market for this invention right now, no demand for it, and that it’s too inefficient anyway.
Care to guess what happened next? Ten years later, the 3.5-inch drives took over, portable computers utilizing them were everywhere, and the 5.25-inch drive companies crumbled away.
That is the essence of disruption: decisions that make sense in the present turn out to be clearly wrong in the long term, once markets change. Companies that relax and invest only in sustaining innovations, instead of trying to radically change their products and reshape the markets themselves, are doomed to fail. In Peter Diamandis’ words –
“If you aren’t disrupting yourself, someone else is.”
Now that you understand the basics of the Theory of Disruption, let’s see how it applies to Magic.
Magic and Disruption
Wizards of the Coast has made almost exclusively sustaining improvements over the last twenty years: its talented R&D team focused almost entirely on releasing new expansions with new cards and new playing mechanics. WotC did try to disrupt itself once by creating the Magic Online platform, but failed to support and nurture this disruptive innovation. The online platform remained an outdated relic – a relic that made money, to be sure, but was slowly becoming irrelevant in the online world of collectible card games.
In the last five years, many other collectible card games reared their heads online, including minor successes like Shadow Era (200,000 players, ~$156,000 annual revenue) and Urban Rivals (estimated ~$140,000 annual revenue). Each of the above made discoveries in the online world: they realized that players need to be offered cards for free, that they need to be lured to play every day, and that the free-to-play model can still prove profitable since the company’s costs are close to zero: the firm doesn’t need to physically print new cards or to distribute them to retailers. But these upstarts were still so small that WotC could afford to effectively ignore them. They didn’t pose a real threat to Magic.
Then Hearthstone burst into existence in 2014, and everything changed.
Hearthstone’s developers took the best traits of Magic and combined them with all the insights the online gaming industry has developed over recent years. They made the game essentially free to play to attract a large number of players, understanding that their revenues would come from the small fraction of players who spend money on the game. They minimized time waste by setting a time limit on every player’s turn, and by establishing a rule that players can only act during their own turn (so there’s no need to wait for the other player’s response after every move). They even broke down the Magic draft tournaments of eight people, and made it so that every player who drafts a deck can play against any other player who drafted a deck, at any time. There’s no time wasted in Hearthstone – just games to play and fun to be had.
WotC was still deep asleep at that time. In July 2014, Magic brand manager Liz Lamb-Ferro told GamesBeat that –
“If you’re looking for just that immediate face-to-face, back-and-forth action-based game with not a lot of depth to it, then you can find that. … But if you want the extras … then you’re eventually going to find your way to Magic.”
Lamb-Ferro was right – Hearthstone IS a simpler game – but that simplicity streamlines gameplay, and thus makes the game more rapid and enjoyable for many players. And even if we were to accept that Hearthstone does not attract veteran players who “want the extras” (actually, it does), WotC should have realized that other online collectible card games would soon combine Magic’s sophistication with Hearthstone’s mechanisms for streamlining gameplay. And indeed, in 2014 a new game – SolForge – took all of the strengths of Hearthstone while adding a mechanic of card transformation (each card transforming into three different versions of itself) that could only be possible in card games played online. SolForge doesn’t even have a physical version and never could, and the game is already costing Magic a few more veteran players.
This is the point at which WotC began realizing they were falling far behind the curve. And so, in the middle of 2015 they released Duels of the Planeswalkers 2016. I won’t even bother detailing all the infuriating problems with the game. Suffice it to say that it has garnered more negative reviews than positive ones, and made clear that WotC was still lagging far behind its competitors in its understanding of the virtual world, user experience, and what players actually want. In short, WotC found itself in the position of the 8-inch drive manufacturers, suddenly realizing that the market had changed under its nose in less than two years.
What Could WotC do?
The sad truth is that WotC can probably do nothing right now to fix Magic. The firm can continue churning out sustaining improvements – new expansions and new exciting cards – but it will find itself hard pressed to take over the digital landscape. Magic is a game that was designed for the physical world, and not for the current frenzied pace of the virtual collectible card games. Magic simply isn’t suitable for the new market, unless WotC changes the rules so much that it’s no longer the same game.
Could WotC change the rules in such a dramatic fashion? Yes, but at a great cost. The company could recreate the game online with new cards and rules, but it would have to invest time and effort in relearning the workings of the virtual world and creating a new platform for the revised Magic. Unfortunately, it’s not clear that WotC will have time to do that with Hearthstone, SolForge and a horde of other card games snarling at its heels. The future of Magic online does not look bright, to say the least.
Does that mean Magic the Gathering will vanish completely? Probably not. The Magic brand is still strong everywhere except for the virtual world, which means that in the next five years the game will remain in existence mostly in the physical world, where it will bring much joy to children in school breaks, and much money to the pockets of WotC. During these five years, WotC will have the opportunity to rethink and recreate the game for the next big market: virtual and augmented reality. If the firm succeeds in that front, it’s possible that Magic will be reinvented for the new-new market. If it fails and elects to keep the game anchored only in the physical world, then Magic will slowly but surely vanish away as the market changes and new and exciting games take over the attention span of the next generation.
That’s what happens when you disregard the Theory of Disruption.
Whenever a futurist talks about the future and lays out all the dazzling wealth technological advancements hold in store for us, there is one question that is always asked by the audience.
“Where is that flying car you promised me?”
Well, we may be drawing near to a future of flying cars. While the road to that future may still be long and arduous, I’m willing to forecast that twenty years from now we will have flying cars for civilian use – but only if three technological and societal conditions are fulfilled by that time.
In order to understand these conditions, let us first examine briefly the history of flying cars, and understand the reasons behind their absence in the present.
Flying Cars from the Past
Surprising as it may be, the concept of flying cars has been around far longer than the Back to the Future trilogy. Henry Ford himself produced a rudimentary, experimental ‘flying car’ in 1926, although it was really more of a mini-airplane for the average American consumer. Despite the public’s excitement, the idea crashed and burned within two years, together with the prototype and its test pilot.
One of the forgotten historical flying cars: a prototype of the AVE Mizar.
Since the 1920s, it seems like innovators and inventors came up with flying cars almost once a decade. You can see pictures of some of these cars in Popular Mechanics’ gallery. Some crashed and burned, in the tradition set by Ford. Others managed to soar sky high. None actually made it to mass production, for two main reasons:
Extremely wasteful: flying cars are extremely wasteful in terms of fuel consumption. Their energy efficiency is abysmal when compared to that of high-altitude and high-speed airplanes.
Extremely unsafe: let’s be honest for a moment, OK? You give people cars that can drive on what is essentially a one-dimensional road, and what do they do? They cause traffic accidents. What do you think would happen if you gave everyone the ability to drive a car in three dimensions? Crash, crash and burn all over again. For flying cars to become widely used in society, everyone would need to take flying lessons. Good luck with that.
These two limitations together ensured that flying cars for the masses remained a fantasy – and they still largely are. In fact, I would go as far as saying that any new concept or prototype of a flying car that does not take these challenges into account is only presented to the public as a ‘flying car’ as a publicity stunt.
But now, things are beginning to change, because of three trends that together will provide answers to the main barriers standing in the way of flying cars.
The Three Trends that will Enable Flying Cars
There are three trends that, combined, will enable the use of flying cars by the public within twenty years.
First Trend: Massive Improvement in Aerial Drone Capabilities
If you visit your city’s playgrounds, you may find children there having fun flying drones around. The drones they’re using – which often cost less than $200 – would’ve been considered highly sophisticated weapons of war just twenty years ago, and would’ve been sold by arms manufacturers at prices on the order of millions of dollars.
14-year-old Morgan Tien with his drone. Source: Bend Bulletin
Dr. Peter Diamandis, innovator, billionaire and futurist, wrote in 2014 about the massive improvement in the capabilities of aerial drones. Briefly, current-day drones are a product of exponential improvement in computing elements (inertial measurement units), communications (GPS receivers and systems), and even sensors (digital cameras). All of the above – at their current sizes and prices – would not have been available even ten years ago.
Aerial drones are important for many reasons, not least because they may yet serve as the basis for a flying car. Innovators, makers and even firms today are beginning to strap together several drones, and turn them into a flying platform that can carry individuals around.
The most striking example of this kind comes from a Canadian inventor who recently flew 275 meters on a drone platform he basically fashioned in his garage.
Another, more cumbersome version of a human-transportation drone (let’s call them HTDs from now on, shall we?) was demonstrated this week at the Las Vegas Convention Center. It is essentially a tiny helicopter with four double-propellers attached, much like a large drone. It has room for just one traveler, and can fly for up to 23 minutes according to the manufacturer. Most importantly, the Ehang 184, as it’s called, is supposed to be autonomous, which brings us straight to the next trend: the rise of machine intelligence.
Ehang 184. Credit: Ehang. Originally found on Gizmag.
Second Trend: Machine Intelligence and Flying Cars
There can be little question that drones will keep on improving in their capabilities. We will improve our understanding of the science and technology behind aerial drones, and develop more efficient tools for aerial travel, including some that will carry people around. But will these tools be available for mass-use?
This is where the safety barrier comes into the picture. You can’t let the ordinary Joe Shmoe control a vehicle like the Ehang 184, or even a lightweight drone platform – not without teaching them how to fly the thing, which would take long hours of practice, cost a lot of money, and sharply limit the number of potential users.
This is where machine intelligence comes into the picture.
Autonomous control is virtually a must for publicly usable HTDs. Luckily, machine intelligence is making leaps and bounds forward, with autonomous (driverless) cars travelling the roads even today. If such autonomous systems can function for cars on the roads, why not do the same for drones in the air?
As things currently stand, all aerial drones will have to be controlled at least partly autonomously, in order to prevent collisions with other drones. NASA is already planning a traffic management system for drones, which could include tens of thousands of drones – and many more than that, if the need arises. The next logical step, therefore, is to include future HTDs in this system, taking control out of the pilot’s hands and transferring it completely to the vehicle and the system controlling it.
If such a system for managing aerial traffic becomes a reality, and assuming drone capabilities advance enough to provide human transportation services, then autonomous HTDs for mass use will not be far behind.
The last two trends address the second barrier – the inherent lack of safety. The third trend, which I will present now, deals with the first barrier: the inefficient and wasteful use of energy.
Third Trend: Solar Energy
All small drones rely on electricity to function. Even a larger drone like the Ehang 184, which could be used for human transport, is powered by electricity and can fly for 23 minutes before requiring a recharge. While 23 minutes may not sound like a lot of time, it’s more than enough for people to ‘hop’ from one side of most cities to the other, as long as there isn’t aerial congestion.
Of course, that’s the situation today. But batteries keep improving. Elon Musk claims that by 2017, Tesla’s electric cars will have a 600-mile range on a single charge, for example. As batteries improve further, HTDs will be able to stay in the air for even longer periods of time, despite being powered by electricity alone. The reliance on electricity is important, since twenty years from now it is highly likely that we’ll have much cheaper electric energy coming directly from the sun.
Support for this argument comes from the exponential decline in the costs associated with producing and utilizing solar energy. Forty years ago, it would’ve cost about $75 to produce one watt of solar energy. Today the cost is less than a single dollar per watt. And as prices go down, the number of solar panel installations soars sky-high, roughly doubling every two years. Worldwide solar capacity in 2014 was 53 times higher than in 2005.
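To make the pace of that decline concrete, here is a rough back-of-the-envelope calculation based on the two price points above, treating the decline as a smooth exponential (which it only approximately is):

```python
import math

# Rough price points for solar energy, in dollars per watt, ~forty years apart
price_then, price_now = 75.0, 1.0
years = 40

# Assume a smooth exponential decline between the two points
annual_decline = 1 - (price_now / price_then) ** (1 / years)
halving_time = years * math.log(2) / math.log(price_then / price_now)

print(f"average annual price decline: ~{annual_decline:.0%}")   # roughly 10% per year
print(f"price halves roughly every {halving_time:.1f} years")   # roughly every 6.4 years
```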
If the rising trend of solar energy does not grind to a halt sometime in the next decade, then we will obtain much of our electric energy from the sun. We won’t have usable passenger solar airplanes – these need high-energy jet fuel to operate – but we will have solar panels pretty much everywhere: covering the sides and top of every building, and quite possibly every car as well. Buildings would both consume and produce energy. Much of the unneeded energy would be saved in batteries, or almost instantaneously diverted via the smart grid to other spots in the city where it’ll be needed.
If that is the face of the future – and the trends support this view – then HTDs could be an optimal way of transportation in the city of the future. Aerial drones could be deployed on tops of houses and skyscrapers, where they will be constantly charged by solar panels until they need to take a passenger to another house. Such a leap would only take 10-15 minutes, followed by a recharging period of 30 minutes or so. The entire system would operate autonomously – without human control or interference – and be powered by the sun.
Conclusions and Forecast for the Future
When can we expect this system to be deployed? Obviously it’s difficult to be certain about the future, particularly in cases where technological trends meet with societal, legal and political barriers to entry. Current culture will find it difficult to accept autonomous vehicles, and Big Fossil Fuel firms are still trying to pretend solar energy isn’t here to stay.
All the same, it seems that HTDs are already rearing their heads, with several inventors working separately to produce them. Their attempts are still extremely hesitant, but every attempt demonstrates the potential in HTDs and their viability for human transportation. I would therefore expect that in the next five years we will see demonstrations of HTDs (not for public use yet) that can carry individuals to a distance of at least one mile, and can be fully charged within one hour by solar panels alone. That is the easy forecast to make.
The more difficult forecast involves the use of autonomous aerial drones, the assimilation of HTDs into an overarching system that controls all the drones in a shared aerial space, and the mass deployment of HTDs in a city. Each of these achievements needs to be made separately in order to fulfill the larger vision of a flying car for the masses. I am going to take a wild guess here, and suggest that if no Hindenburg-like disaster happens, we’ll see real flying cars in our cities twenty years from now – by the year 2035. It is likely that these HTDs will only be able to carry a single individual, and will probably be used more as a ‘flying taxi’ service between buildings for individual businesspeople than as a full-blown family flying car.
And then, finally, when people ask me where their flying car is, I will be able to provide a simple answer: “It’s parked on the roof.”
A week ago I lectured in front of an exceedingly intelligent group of young people in Israel – “The President’s Scientists and Inventors of the Future”, as they’re called. I decided to talk about the future of robotics and its uses in society, and as an introduction to the lecture I tried to dispel a few myths about robots that I’ve heard repeatedly from older audiences. Perhaps not so surprisingly, the kids were just as dismissive of these myths as I was. All the same, I’m writing these robot myths here, for all the ‘old’ people (20+ years old) who are not as well acquainted with technology as our kids.
As a side note: I lectured in front of the Israeli teenagers about the future of robotics, even though I’m currently residing in the United States. That’s another thing robots are good for!
Lecturing via a telepresence robot to a group of bright youths in Israel, at the Technion.
First Myth: Robots must be shaped as Humanoids
Ever since Karel Capek’s first play about robots, the general notion among the public has been that robots must resemble humans in their appearance: two legs, two hands and a head with a brain. Fortunately, most sci-fi authors stop at that point and do not add genitalia as well. The idea that robots have to look just like us is, quite frankly, ridiculous, and stems from an excessive appreciation of our own form.
Today, this myth is being dispelled rapidly. Autonomous vehicles – basically robots designed to travel on roads – obviously look nothing like human beings. Even telepresence robot manufacturers have given up on notions of robotic arms and legs, and are producing robots that often look more like a broomstick on wheels. Robotic legs are simply too difficult to operate, too costly in energy, and much too fragile with the materials we have today.
Telepresence robots – no longer shaped like human beings. No arms, no legs, definitely no genitalia. Source: Neurala.
Second Myth: Robots have a Computer for a Brain
This myth is interesting in that it’s both true and false. Obviously, robots today are operated by artificial intelligence run on a computer. However, the artificial intelligence itself is vastly different from the simple, rule-based systems we’ve had in the past. State-of-the-art AI engines are based on artificial neural networks: basically a very simple simulation of a small part of a biological brain.
The big breakthrough with artificial neural networks came about when Andrew Ng and other researchers in the field showed they could use cheap graphical processing units (GPUs) to run sophisticated simulations of artificial neural networks. Suddenly, artificial neural networks appeared everywhere, at a fraction of their previous price. Today, all the major IT companies are using them, including Google, Facebook, Baidu and others.
Although artificial neural networks have been largely confined to IT in recent years, they are beginning to direct robot activity as well. By employing artificial neural networks, robots can start making sense of their surroundings, and can even be trained for new tasks by watching human beings perform them, instead of being programmed manually. In effect, robots employing this new technology can be thought of as having (exceedingly) rudimentary biological-like brains, and in the next decade they can be expected to reach an intelligence level similar to that of a dog or a chimpanzee. We will be able to train them for new tasks simply by instructing them verbally, or even by showing them what we mean.
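For readers who want to see what “a very simple simulation of a small part of a biological brain” looks like in practice, here is a toy artificial neural network in plain NumPy, learning the classic XOR problem. Real robotic systems use vastly larger networks running on GPUs, but the principle is the same:

```python
import numpy as np

# Toy artificial neural network: 2 inputs -> 8 hidden neurons -> 1 output,
# trained with plain gradient descent on the XOR problem.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(10000):
    # Forward pass: inputs -> hidden layer -> output
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of mean squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient descent update
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0] as the network learns XOR
```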
This video clip shows how an artificial neural network AI can ‘solve’ new situations and learn from games, until it gets to a point where it’s better than any human player.
Admittedly, the companies using artificial neural networks today are operating large clusters of GPUs that take up plenty of space and energy to operate. Such clusters cannot be easily placed in a robot’s ‘head’, or wherever its brain is supposed to be. However, this problem is easily solved when the third myth is dispelled.
Third Myth: Robots as Individual Units
This is yet another myth that we see very often in sci-fi. The Terminator, Asimov’s robots, R2D2 – those are all autonomous and individual units, operating by themselves without any connection to The Cloud. Which is hardly surprising, considering there was no information Cloud – or even widely available internet – back in the day when those tales and scripts were written.
Robots in the near future will function much more like a colony of ants than as individual units. Any piece of information that one robot acquires and deems important will be uploaded to the main servers, analyzed and shared with the other robots as needed. Robots will, in effect, learn from each other in a process that will increase their intelligence, experience and knowledge exponentially over time. Indeed, shared learning will accelerate the rate of AI development, since the more robots we have in society, the smarter they will become. And the smarter they become, the more we will want to assimilate them into our daily lives.
The Tesla cars are a good example of this sort of mutual learning and knowledge sharing. In the words of Elon Musk, Tesla’s CEO –
“The whole Tesla fleet operates as a network. When one car learns something, they all learn it.”
Elon Musk and the Tesla Model X: the cars that learn from each other. Source: AP and Business Insider.
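Here is a deliberately naive sketch of the idea behind such fleet-wide learning; the class and method names are mine, purely for illustration, and a real system would exchange validated model updates rather than dictionary entries:

```python
# Illustrative sketch of fleet-wide knowledge sharing between robot units.
# All names here are hypothetical and simplified for the example.

class SharedKnowledge:
    """Central store that every unit in the fleet reads from and writes to."""
    def __init__(self):
        self.facts = {}

    def upload(self, key, value):
        self.facts[key] = value        # one unit learns something...

    def lookup(self, key):
        return self.facts.get(key)     # ...and every other unit can use it


class RobotUnit:
    def __init__(self, name, shared):
        self.name, self.shared = name, shared

    def learn(self, key, value):
        print(f"{self.name} learned: {key} -> {value}")
        self.shared.upload(key, value)

    def recall(self, key):
        return self.shared.lookup(key)


fleet_memory = SharedKnowledge()
car_a = RobotUnit("car_a", fleet_memory)
car_b = RobotUnit("car_b", fleet_memory)

car_a.learn("pothole@5th_ave", "slow down")               # only car_a met the pothole
print("car_b recalls:", car_b.recall("pothole@5th_ave"))  # yet car_b already knows
```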
Fourth Myth: Robots can’t make Moral Decisions
In my experience, many people still adhere to this myth, in the belief that robots do not have consciousness and thus cannot make moral decisions. This is a false inference: I can easily program an autonomous vehicle to stop before hitting human beings on the road, even without the vehicle enjoying any kind of consciousness. Moral behavior, in this case, is the product of programming.
Things get complicated when we realize that autonomous vehicles, in particular, will have to make novel moral decisions that no human being was ever required to make in the past. What should an autonomous vehicle do, for example, when it loses control over its brakes and finds itself rushing toward a collision with a man crossing the road? Obviously, it should veer to the side of the road and hit the wall. But what should it do if it calculates that its ‘driver’ will be killed as a result of the collision with the wall? Who is more important in this case? And what happens if two people cross the road instead of one? What if one of those people is a pregnant woman?
These questions demonstrate that it is hardly enough to program an autonomous vehicle for specific encounters. Rather, we need to program into it (or train it to obey) a set of moral rules – heuristics – according to which the robot will interpret any new occurrence and reach a decision.
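As a deliberately simplified illustration of what “programming in a set of moral heuristics” could look like, here is a sketch in which the options, rules and weights are entirely invented for the example; it is not a real or recommended ethical policy:

```python
# Purely illustrative moral-heuristics sketch for an autonomous vehicle.
# The options and weights below are invented for the sake of the example.

def choose_action(options):
    """Pick the option that minimizes expected harm under simple, fixed weights."""
    def expected_harm(option):
        # Heuristic: harm to pedestrians weighs more than harm to the passenger,
        # and property damage weighs least of all.
        return (10 * option["pedestrians_at_risk"]
                + 5 * option["passengers_at_risk"]
                + 1 * option["property_damage"])
    return min(options, key=expected_harm)

options = [
    {"name": "continue straight", "pedestrians_at_risk": 1, "passengers_at_risk": 0, "property_damage": 0},
    {"name": "veer into wall",    "pedestrians_at_risk": 0, "passengers_at_risk": 1, "property_damage": 1},
]

print(choose_action(options)["name"])  # -> "veer into wall" under these (invented) weights
```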
And so, robots must make moral decisions.
Conclusion
As I wrote in the beginning of this post, the youth and the ‘techies’ are already aware of how out-of-date these myths are. Nobody as yet, though, knows where the new capabilities of robots will take us when they are combined together. What will our society look like, when robots are everywhere, sharing their intelligence, learning from everything they see and hear, and making moral decisions not from an individual unit perception (as we human beings do), but from an overarching perception spanning insights and data from millions of units at the same time?
This is where we are heading – toward a super-intelligence composed of incredibly sophisticated AI, with robots as its eyes, ears and fingertips. It’s a frightening future, to be sure. How could we possibly control such a super-intelligence?
That’s a topic for a future post. In the meantime, let me know if there are any other myths about robots you think it’s time to ditch!
Almost four years ago, the presidential elections took place in the United States. Barack Obama competed against Mitt Romney in the race for the White House. Both candidates delivered inspiring speeches, appeared at every institution that would have them, and employed hundreds of paid consultants and volunteers who advertised them throughout the nation. In the end, Obama won the race for the presidency, possibly because of his opinions and ideas… or because of his reliance on data scientists. In fact, as Sasha Issenberg’s article on the 2012 elections in MIT Technology Review describes –
“Romney’s data science team was less than one-tenth the size of Obama’s analytics department.”
How did Obama utilize all of those data scientists?
Analyzing the Individual Voter
Up until 2012, individual voters were analyzed according to a relatively simplistic system which took into account only a few limited parameters, such as age, place of residence, and so on. The messages those potential voters received in their phones, physical mailboxes and virtual inboxes were customized according to these parameters. Obama’s team of data scientists expanded the list into dozens of different parameters and criteria. They then utilized a system in which customized messages were mailed to certain representative voters, who were later surveyed so that the scientists could figure out how their opinions changed according to the structure of the messages sent.
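To make the idea of scoring individual voters concrete, here is a toy sketch of a “persuadability” model; the features, data and library choice are my own illustrative assumptions, and the real campaign models were far more elaborate:

```python
# Toy "persuadability" model. Features, data and scores are invented purely
# to illustrate the idea of scoring individual voters.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per voter: [age, past_turnout_rate, media_contacts, undecided_flag]
X_train = np.array([
    [23, 0.2, 1, 1],
    [67, 0.9, 4, 0],
    [35, 0.5, 2, 1],
    [51, 0.7, 0, 0],
    [29, 0.3, 3, 1],
    [44, 0.6, 1, 0],
])
# 1 = shifted opinion after receiving a tailored message (per a follow-up survey)
y_train = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

new_voter = np.array([[31, 0.4, 2, 1]])
score = model.predict_proba(new_voter)[0, 1]
print(f"estimated persuadability: {score:.0%}")  # a campaign would target high scorers
```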
This level of analysis and understanding of individual voters, and of the messages that helped them change their opinions, aided Obama in delivering the right messages, at the right time, to the persuadable people. If the term “persuadable” strikes you as sinister, as if Obama’s team were preying on the weak of mind or those sitting on the fence, you should know that it was used by Terry Walsh, who coordinated the Obama campaign’s polling and paid-media spending.
Of course, being a “persuadable” voter does not mean that you’re a helpless dummy. Rather, it just means that you’re still uncertain which way to turn. But when political parties can find those undecided voters, focus on them and analyze each one with the most sophisticated computer models available to find out all about their levers and buttons, how much free choice does that leave those people?
I could go on describing other strategies utilized by Obama’s team in the 2012 elections. They identified voters who were likely to ‘switch sides’ following just one phone call, and had about 500,000 conversations with those voters. They supplied a data collection firm with the addresses of many “easily persuadable” voters, and received in return the TV-viewing records of those households. That way, the campaign team could maximize the efficiency of TV advertisements – airing them at the right times, on the right channels, and in the right places. All of the above is well documented in Issenberg’s article and other sources.
The Republican Drowning Whale
Obama wasn’t the only one to utilize big data and predictive analytics in the 2012 campaign. His opponent, Mitt Romney, had a team of data scientists of his own. Unfortunately for Romney, his team didn’t even come close to the level of operations of Obama’s team. Romney’s team invested much of its effort in an app named Orca, which was supposed to indicate which of the expected Republican voters actually turned up to vote – and to send messages to the Republican slackers to encourage them to haul their tucheses to the voting booths. In practice, the app was horribly conceived, and crashed numerous times during Election Day, leading to utter confusion about the goings-on.
Mitt Romney being packed up after the massive failure of the Orca system in the 2012 presidential elections. Image originally from Phil Ebersole’s blog.
Regardless of the success of the Democrats’ data systems versus the Republicans’, one thing is clear: both parties are going to use big data and predictive analytics in the upcoming 2016 elections. In fact, we are entering a very interesting stage in the history of the 21st century: the Data Race.
From Space to Data
The period known as the Space Race took place in the 1960s, when the United States competed against the Soviet Union in a race to space. As a result of the Space Race, space launch technologies advanced in leaps and bounds, with both countries fighting to demonstrate their superior science and technology. Great need, and great budgets, produce great results quickly.
In 2016, we will see a new kind of race starting: the Data Race. In 2012 it wasn’t really a race; the Democrats basically stepped all over the Republicans. In 2016, however, the real Data Race in politics will be on. The Democrats will gather their teams of data scientists once more, and build on the piles of data gathered in the 2012 elections and since then. The Republicans – possibly Trump with his self-funded election campaign – will learn from their mistakes in 2012, hire the best data scientists they can find, and utilize methodologies similar to or better than those developed by the Democrats.
In short, both parties will find themselves in the midst of a Data Race, striving to obtain as much data as they can about American citizens: their lifestyles, habits, choices, and any other tidbit of information that can be used to understand the individual voter, work out how best to approach him or her, and convert him or her to the party’s point of view. The data gathering and analysis systems will cost a lot, obviously, but since recent rulings in America allow larger contributions to be made to political candidates, money should not be a problem.
Conclusion: Where are We Heading?
It’s quite obvious that both American parties in 2016 are going to compete in a Data Race. The bigger question is whether we should even allow them to do so freely. Democracy, after all, is based on the assumption that every person can make up his or her own mind and reach his or her own decisions. Do we really honor that core assumption when political candidates can analyze human beings with the power of super-computers, big data and predictive analytics? Can an individual citizen truly choose freely, when powers on both sides are pulling and pushing at that individual’s levers and buttons, with methods tested and proven on millions of similarly-minded individuals?
Using predictive analytics in politics holds an inherent threat to democracy: by understanding each individual, we can also devise approaches and methodologies to influence every individual with maximal efficiency. This approach has the potential to turn most individuals into mere puppets in the hands of the powerful and the affluent.
Does that mean we should refrain from using big data and predictive analytics in politics? Of course not – but we can regulate its use so that instead of campaign managers focusing their efforts on the “easily persuadable”, they will use the data gleaned from the public to understand people’s real concerns and work to address them. We should all hope our politicians are heading in that direction, and if they aren’t – we should give them a shove towards it.
A week ago I covered in this blog the possibility of using aerial drones for terrorist attacks. The following post dealt with the Failure of Myth and covered Causal Layered Analysis (CLA) – a futures studies methodology meant to counter the Failure of Myth and allow us to consider alternative futures radically different from the ones we tend to consider intuitively.
In this blog post I’ll combine insights from both recent posts together, and suggest ways to deal with the terrorism threat posed by aerial drones, in four different layers suggested by CLA: the Litany, the Systemic view, the Worldview, and the Myth layer.
To understand why we have to use such a wide-angle lens for the issue, I would compare the proliferation of aerial drones to another period in history: the transition between the Bronze Age and the Iron Age.
From Bronze to Iron
Sometime around 1300 BC, iron smelting was discovered by our ancient forefathers, presumably in the Anatolia region. The discovery rapidly diffused to many other regions and civilizations, and changed the world forever.
If you ask people why iron weapons are better than bronze ones, they’re likely to answer that iron is simply stronger, lighter and more durable than bronze. The truth, however, is that iron weapons are not much more effective than bronze ones. The real importance of iron smelting, according to “A Short History of War” by Richard A. Gabriel and Karen S. Metz, is this:
“Iron’s importance rested in the fact that unlike bronze, which required the use of relatively rare tin to manufacture, iron was commonly and widely available almost everywhere… No longer was it only the major powers that could afford enough weapons to equip a large military force. Now almost any state could do it. The result was a dramatic increase in the frequency of war.”
It is easy to imagine political and national leaders using only the first and second layers of CLA – the Litany and the Systemic view – at the transition from the Bronze to the Iron Age. “We should bring these new iron weapons to all our soldiers”, they probably told themselves, “and equip the soldiers with stronger shields that can deflect iron weapons”. Even as they enacted these changes in their armies, the worldview itself shifted, and warfare was vastly transformed because of the large number of civilians who could suddenly wield an iron weapon. Generals who thought that preparing for the change merely meant equipping their soldiers with iron weapons found themselves on the battlefield facing armies much larger than their own, because of new conscription models that their opponents had developed.
Such changes in warfare and in the existing worldview could have been realized in advance by utilizing the third and fourth layers of CLA – the Worldview and the Myth.
Aerial drones are similar to Iron Age weapons in that they are proliferating rapidly. They can be built or purchased at ridiculously low prices, by practically anyone. In the past, only the largest and most technologically sophisticated governments could afford to employ aerial drones. Nowadays, every child has them. In other words, the world is overturning everything we thought we knew about the possession and use of unmanned aerial vehicles. Such a dramatic change – which our descendants may yet come to call the Aerial Age when they look back in history – forces us to rethink everything we knew about the world. We must, in short, analyze the issue from a wide-angle view, with an emphasis on the third and fourth layers of CLA.
How, then, do we deal with the threat aerial drones pose to national security?
First Layer: the Litany
The intuitive way to deal with the threat posed by aerial drones is simply to reinforce the measures we’ve had in place before. Under the thinking constraints of the first layer, we should basically strive to strengthen police forces and to provide larger budgets for anti-terrorist operations. In short, we should do just as we did in the past, but more and better.
It’s easy to see why public systems love the litany layer: these measures build reputation and generate a general feeling that “we’re doing something to deal with the problem”. What’s more, they require extra budget (to be obtained from Congress) and make the organization larger along the way. What’s not to like?
Second Layer: the Systemic View
Under the systemic view we can think about the police forces and the tools they have to deal with the new problem. It immediately becomes obvious that such tools are sorely lacking. Therefore, we need to improve the system and support the development of new techniques and methodologies for dealing with the new threat. We might support the development of anti-drone weapons, for example, or open an entirely new police department dedicated to dealing with drones. Police officers will be trained to deal with aerial drones, so that nothing is left to chance. The judicial and regulatory systems join the struggle at this layer by issuing tightly regulated licenses to operate aerial drones.
An anti-drone gun. Originally from BattelleInnovations and downloaded from TechTimes
Again, we could stop the discussion here and still have a highly popular set of solutions. As we delve deeper into the Worldview layer, however, the opposition starts building up.
Third Layer: the Worldview
When we consider the situation at the worldview layer, we see that the proliferation of aerial drones is simply a by-product of several technological trends: the miniaturization and condensation of electronics, artificial intelligence sophisticated enough (at least by the standards of 20-30 years ago) to control the rotor blades, and even personalized manufacturing with 3D-printers, so that anyone can construct his or her own personal drone in the garage. All of the above lead to the Aerial Age, in which individuals can explore the sky as they like.
Exploration of the sky is now in the hands of individuals. Image originally from DailyMail India.
Looking at the world from this point of view, we immediately see that the vast expected proliferation of aerial drones in the coming decade will force us to reconsider our previous worldviews. Should we really focus on local or systemic solutions, rather than preparing ourselves for this new Aerial Age?
We can look even further than that, of course. In a very real way, aerial drones are but a symptom of a more general change in the world. The Aerial Age is but one aspect of the Age of Freedom, or the Age of the Individual. Consider that the power of designing and manufacturing is being taken from nations and granted to individuals via 3D-printers, powerful personal computers, and the internet. As a result of these inventions and others, individuals today hold power that once belonged only to the greatest nations on Earth. The established worldview, in which nations are the sole holders of power, is changing.
When one looks at the issue like this, it is clear that such a dramatic change can only be countered or mitigated by dramatic measures. Nations that want to retain their power and prevent terrorist attacks will be forced to break rules that were created long ago, back in the Age of Nations. It is entirely possible that governments and rulers will have to sacrifice their citizens’ privacy, and turn to monitoring their citizens constantly much as the NSA did – and is still doing to some degree. When an individual dissident has the potential to bring harm to thousands and even millions (via synthetic biology, for example), nations can ill afford to take any chances.
What are the myths that such endeavors will disrupt, and what new myths will they be built upon?
Fourth Layer: the Myth
I’ve already identified a few myths that will be disrupted by the new worldview. First and foremost, we will let go of the idea that only a select few can explore the sky. The new myth is that of Shared Sky.
The second myth to be disrupted is that nations hold all the technological power, while terrorists and dissidents are reduced to using crude bombs at best, or pitchforks at worst. This myth is no longer true, and it will be replaced by a myth of Proliferation of Technology.
The third myth to be dismissed is that governments can protect their citizens efficiently with the tools they have in the present. When we have such widespread threats in the Age of Freedom, governments will experience a crisis in governance – unless they turn to monitoring their citizens so closely that any pretense of privacy is lost. And so, it is entirely possible that in many countries we will see the emergence of a new myth: Safety in Exchange for Privacy.
Conclusion
Last week I analyzed the issue of aerial drones being used for terrorist attacks, utilizing the Causal Layered Analysis methodology. When I look at the results, it’s easy to see why many decision makers are reluctant to solve problems at the third and fourth layers – Worldview and Myth. The solutions found in the lower layers – the Litany and the Systemic view – are so much easier to understand and to explain to the public. Nevertheless, if you want to actually understand the possibilities the future holds in any subject, you must look beyond the first two layers in the long term, and focus instead on the larger picture.
And with that said – happy new year to one and all!
At the 1900 World Exhibition in Paris, French artists made an attempt to forecast the shape of the world in 2000. They produced a few dozen vivid and imaginative drawings (clearly they did not succumb to the Failure of the Paradigm!).
Here are a few samples from the World Exhibition. Can you tell what all of those have in common with each other?
Police motorcycles in the year 2000
Skype in the year 2000
Phone calls and radio in the year 2000
Fishing for birds in the year 2000
Psychologist Daniel Gilbert wrote about similar depictions of the future in his book “Stumbling on Happiness” –
“If you leaf through a few of them, you quickly notice that each of these books says more about the times in which it was written than about the times it was meant to foretell.”
You only need to take another look at the images to convince yourself of the truth of Gilbert’s statement. The women and men are dressed the same way they were dressed in 1900, except when they go ‘bird hunting’, in which case the gentlemen wear practical swimming suits, whereas the ladies still stick with their cumbersome dresses underwater. Policemen still carry swords and wear brass helmets, and of course there are no policewomen. Last but not least, it seems that the future is reserved entirely for Caucasians, since nowhere in these drawings can you see persons of African or Asian descent.
The Failure of Myth
While some of the technologies depicted in these old paintings actually became reality (Skype is a nice example), it is clear the artists completely failed to capture a larger change. You may call this a change in the zeitgeist, the spirit of the generation, or in the myths that surround our existence and lives. I’ll be calling this the Failure of Myth, and I hope you’ll agree that it’s impossible to consider the future without also taking into account these changes in our mythologies and underlying social and cultural assumptions: men and women can be equal, people of color have the same rights as white people, and LGBT people have just the same right to exist as heterosexuals. None of these assumptions would have been obvious, or included in the myths and stories upon which society is based, a mere fifty years ago. Today they are taken for granted.
The myth according to which black people had very few real rights was overturned in the 1960s. Few forecasters thought of such an occurrence in advance.
Could we ever have forecast these changes?
Much as with the Failure of the Paradigm, I would posit that we can never accurately forecast the ways in which myths and culture are about to change. We can hazard some guesses, but that’s just what they are: guesswork that relies more on our myths in the present than on a solid understanding of the future.
That said, there are certain methodologies used by foresight researchers that could help us at least chart different solutions to problems in the present, in ways that force us to consider our current myths and worldviews – and challenge them when needed. These methodologies allow us to create alternative futures that could be vastly different from the present in the ways that really matter: how people think of themselves, of each other, and of the world around them.
In the rest of this blog post, I’ll sum up the practical principles of CLA, and show how they could be used to analyze different issues dealing with the future. Following that, in the next blog post, we’ll take a look again at the issue of aerial drones used for terrorist attacks, and use CLA to consider ways to deal with the threat.
Another Failure of Myth: the ancient Greeks could not imagine a future without slavery. None of their great philosophers could escape the myth of slavery. Image originally from Wikipedia
CLA – Causal Layered Analysis
The core of CLA is the idea that every problem can be looked at in four successive layers, each deeper than the previous one. Let’s look at each layer in turn, and see how it adds depth to a discussion of a certain problem: the “high rate of medical mistakes leading to serious injury or death”, as Inayatullah describes in his book. My brief analysis of this problem at every level is almost entirely based on his examples and thoughts.
First Layer: the Litany
The litany is the day-to-day talk. When you’re arguing at dinner parties about the present and the future, you’re almost certainly using the first layer. You’re basically repeating whatever you’ve heard from the media, from the politicians, from thought leaders and from your family. You may make use of data and statistics, but these are only interpreted according to the prevalent and common worldview that most people share.
When we rely on the first layer to consider the issue of medical mistakes, we look at the problem in a largely superficial manner. We can sum up the approach in one sentence: “Physicians make mistakes? Teach them better, and if they still don’t improve, throw them in jail!” In effect, we’re focusing on the people who are making the mistake – the ones who are so easy to blame. The solutions in this layer are usually short-term solutions, and can be summed up in short sentences that appeal to audiences who share the same worldview.
Second Layer: the Systemic View
Using the systemic view of the second layer, we try to delve deeper into the issue. We no longer blame people (although that does not mean we remove the responsibility for their mistakes from their shoulders); instead we try to understand how the system itself contributes to the actions of the individual. To do that we analyze the social, economic and political forces that mold the system into its current shape.
In the case of medical mistakes, the second layer encourages us to start asking tougher questions about the systems under which physicians operate. Could it be, for example, that physicians are rushing their treatments because they are allowed to spend only 5-10 minutes with each patient, as is the custom in many public medical services? Or perhaps the layout of the hospital does not allow physicians to consult easily with each other and reach more solid solutions through teamwork?
The questions asked in the second layer mode of thinking allow us to improve the system itself and make it more efficient. We do not take the responsibility off the shoulders of the individuals, but we do accept that better systems allow and encourage individuals to reach their maximum efficiency.
Third Layer: Worldview
This is the layer where things get hairy for most people. In this layer we try to identify and question the prevalent worldview and how it contributes to the issue. These are our “cognitive lenses”, through which we view and interpret the world.
As we try to analyze the issue of medical mistakes in the third layer, we begin to identify the worldviews behind medicine. We see that in modern medicine, the doctor stands “high above” in the hierarchy of knowledge – certainly much higher than the patients. This hierarchy of knowledge and prestige defines the relationship between the physician and the patient. As we understand this worldview, solutions that would have fit in the second layer – like changing the time physicians spend with patients – seem more like a small bandage on a gut wound than an effective way to deal with the issue.
Another worldview that can be identified and challenged in this layer is the idea that patients actually need to go to clinics or hospitals for check-ups. In an era of tele-presence and electronics, why not make use of wearable computing or digital doctors to take care of many patients? As we see this worldview and propose alternatives, we find that systemic solutions like “changing the shape of the hospitals” become unnecessary once more.
Fourth Layer: the Myth
The last layer, the myth, deals with the stories we tell ourselves and our children about the world and the ways things work. Mythologies are defined by Wikipedia as –
“a collection of myths… [and] stories … [that] explain nature, history, and customs.”
Make no mistake: our children’s books are all myths that serve to teach children how they should behave in society. When my son reads about Curious George, he learns that unrestrained curiosity can lead you into danger, but also to unexpected rewards. When he reads about Hansel and Gretel, he learns of the dangers of trusting strangers and stepmothers. Even fantasy books teach us myths about the value of wisdom, physical prowess and even beauty, as the tall, handsome prince saves the day. Myths are perpetuated everywhere in culture, and are constantly strengthened in our minds through the media.
What can we say about medical mistakes in the Myth level? Inayatullah believes that the deepest problem, immortalized in myth throughout the last two millennia, is that “the doctor knows best”. Patients are taught from a very young age that the physician’s verdict is more important than their own thoughts and feelings, and that they should not argue against it.
While I see the point in Inayatullah’s view, I’m not as certain that it is the reason behind medical mistakes. Instead, I would add a partner myth: “the human doctor knows best”. This myth is instilled in medical doctors at many institutions, and makes it more difficult for them to rely on computerized analysis, or even to consider that as human beings they are biased by nature.
Consolidating the Layers
As you may have realized by now, CLA is not used to forecast one accurate future, but is instead meant to deepen our thinking about potential futures. Any discussion about long-term issues should open with an analysis of those issues in each of the four layers, so that the solutions we propose – i.e. the alternative futures – can deal not only with the superficial aspects of the issue, but also with the deeper causes and roots.
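For readers who find structure easier to see in code, here is a toy sketch that records the four layers of a CLA discussion as a simple data structure, using the medical-mistakes example above. The wording of each layer just paraphrases the analysis summarized in this post; the class and field names are my own invention.

```python
from dataclasses import dataclass

@dataclass
class CLAAnalysis:
    issue: str
    litany: str     # day-to-day framing and quick fixes
    systemic: str   # social, economic and political structures
    worldview: str  # the cognitive lenses behind the system
    myth: str       # the deep stories we tell ourselves

medical_mistakes = CLAAnalysis(
    issue="High rate of medical mistakes leading to serious injury or death",
    litany="Blame the physicians; retrain them or punish them",
    systemic="Short appointment times and hospital layouts that discourage teamwork",
    worldview="The doctor sits above the patient in the hierarchy of knowledge",
    myth="'The (human) doctor knows best'",
)

# A discussion about alternative futures should address all four layers,
# not just the first one or two.
for layer in ("litany", "systemic", "worldview", "myth"):
    print(f"{layer.upper():>9}: {getattr(medical_mistakes, layer)}")
```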
Conclusion
The Failure of Myth – i.e. our difficulty in realizing that the future will change not only technologically, but also in the myths and worldviews we hold – is impossible to counter completely. We can’t know which myths will be promoted by future generations, just as we can’t forecast scientific breakthroughs fifty years in advance.
At most, we can be aware of the existence of the Failure of Myth in every discussion we hold about the future. We must assume, time after time, that the myths of future generations will be different from ours. My grandchildren may look at their meat-eating grandfather in horror, or laugh behind his back at his pants and shirt – while they walk naked in the streets. They may believe that complicated decisions should be left solely to computers, or that physical work should never be performed by human beings. These are just some of the possible myths that future generations can develop for themselves.
In the next blog post, I’ll go over the issue of aerial drones being used for terrorist attacks, and analyze it using CLA to identify a few possible myths and worldviews that we may need to change in order to deal with this threat.