Should You Consider Fate when Planning Ahead?

I was recently asked on Quora whether there is some kind of a grand scheme to things: a destiny that we all share, a guiding hand that acts according to some kind of moral rules.

This is a great question, and one that we’re all worried about. While there’s no way to know for sure, the evidence points against this kind of fate-biased thinking – as a forecasting experiment funded by the US Department of Defense recently showed.

In 2011, the US Department of Defense began funding an unusual project: the Good Judgment Project. In this project, led by Philip E. Tetlock, Barbara Mellers and Don Moore, people were asked to volunteer their time and rate the chance of occurrence of certain events. Overall, thousands of people took part in the exercise and answered hundreds of questions over a period of two years. Their answers were scored continually, as soon as the events in question actually occurred – or failed to occur.
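For readers curious how such forecasts are checked against reality: tournaments of this kind typically score each prediction with a Brier score – the squared gap between the probability a forecaster assigned and what actually happened, averaged over all questions. The sketch below is purely illustrative; the probabilities and outcomes are invented, not taken from the project's data.

```python
# Illustrative sketch of Brier scoring, the kind of measure used in
# forecasting tournaments. All numbers below are made up.

def brier_score(forecasts, outcomes):
    """Mean squared difference between predicted probabilities and outcomes.
    0.0 is a perfect score; higher is worse."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Probabilities two hypothetical forecasters assigned to five events,
# and whether each event occurred (1) or not (0).
events_occurred    = [1, 0, 0, 1, 1]
superforecaster    = [0.85, 0.10, 0.20, 0.90, 0.70]
average_forecaster = [0.60, 0.40, 0.50, 0.55, 0.50]

print(brier_score(superforecaster, events_occurred))     # ~0.03 - close to reality
print(brier_score(average_forecaster, events_occurred))  # ~0.20 - much less accurate
```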

After two years, the directors of the project identified a subset of people they called Superforecasters. These top forecasters were doing so well that their predictions were 30% more accurate than those of intelligence officials who had access to highly classified information!

(And yes, for the statistics-lovers among us: the researchers absolutely did run statistical tests, which showed that the chances of those people being so accurate by accident were minuscule. The superforecasters kept doing well, over and over again.)

Once the researchers identified this subset of people, they began analyzing their personalities and methods of thinking. You can read about it in some of the papers about the research (linked at the end of this answer), as well as in the great book – Superforecasting: The Art and Science of Prediction. For this answer, the important thing to note is that those superforecasters were also tested for what I call “the fate bias”.

[Image: Neither one seems to work. Sorry ’bout that.]

The Fate Bias

There’s no denying that most people believe in fate of some sort: a guiding hand that makes everything happen for a reason, in accordance with some grand scheme or moral rules. This tendency seems to manifest itself most strongly in children, and in God-believers (84.8 percent of whom believe in fate), but even 54.3 percent of atheists believe in fate.

It’s obvious why we want to believe in fate. It gives our woes, and the sufferings of others, a special meaning. It justifies our pains, and makes us think that “it’s all for a reason”. Our belief in fate helps us deal with bereavement and with physical and mental pain.

But it also makes us lousy forecasters.

 

Fate is Incompatible with Accurate Forecasting

In the Good Judgement Project, the researchers ran tests on the participants to check for their belief in fate. They found out that the superforecasters utterly rejected fate. Even more significantly, the better an individual was at forecasting, the more inclined he was to reject fate. And the more he rejected fate, the more accurate he was at forecasting the future.

 

Fate is Incompatible with the Evidence

And so, it seems that fate is simply incompatible with the evidence. People who try to predict the occurrence of events in a ‘fateful’ way, as if they were obeying a certain guiding hand, are prone to failure. On the other hand, those who believe there is no ‘higher order to things’ and plan accordingly usually turn out to be right.

Does that mean there is no such thing as fate, or a grand scheme? Of course not. We can never disprove the existence of such a ‘grand plan’. What we can say with some certainty, however, is that human beings who claim to know what that plan actually is, seem to be constantly wrong – whereas those who don’t bother explaining things via fate, find out that reality agrees with them time and time again.

So there may be a grand plan. We may be in a movie, or God may be looking down on us from up above. But if that’s the case, it’s a god we don’t understand, and the plan – if there actually is one – is completely undecipherable to us. As Neil Gaiman and the late Terry Pratchett beautifully wrote –

God does not play dice with the universe; He plays an ineffable game of His own devising… an obscure and complex version of poker in a pitch-dark room, with blank cards, for infinite stakes, with a Dealer who won’t tell you the rules, and who smiles all the time.

And if that’s the case, I’d rather just say out loud – “I don’t believe in fate” – and plan and invest accordingly.

You’ll simply have better success that way. And when the universe is cheating at poker with blank cards, Heaven knows you need all the help you can get.

 


 

For further reading, here are links to some interesting papers about the Good Judgment Project and the insights derived from it –

Bringing probability judgments into policy debates via forecasting tournaments

Superforecasting: How to Upgrade Your Company’s Judgment

Identifying and Cultivating Superforecasters as a Method of Improving Probabilistic Predictions

Psychological Strategies for Winning a Geopolitical Forecasting Tournament

Rethinking the training of intelligence analysts

 

Things I’ve Learned as ISIS’ Chief Technology Officer; Or – Why ISIS Loves Trump

A few months ago I received a tempting offer: to become ISIS’ chief technology officer.

How could I refuse?

Before you pick up the phone and call the police, you should know that it was ‘just’ a wargame, initiated and operated by the strategy consulting firm Wikistrat. Many experts on ISIS and the Middle East in general took part in the wargame, assuming the roles of the various sides currently waging war on Syrian soil – from Syrian president Bashar al-Assad, to the Western-backed rebels, and even ISIS.

This kind of wargame is pretty common in security organizations, as a way to understand how the enemy thinks. As Harper Lee wrote, “You never really understand a man… until you climb into his skin and walk around in it.”

And so, to understand ISIS, I climbed into its skin, and started thinking aloud and discussing with my ISIS teammates what we could do to really overwhelm our enemies.

But who are those enemies?

In one word, everyone.

This is not an exaggeration. Abu Bakr al-Baghdadi, the leader of ISIS and its self-proclaimed caliph, warned Muslims in 2015 that the organization’s war is – “the Muslims’ war altogether. It is the war of every Muslim in every place, and the Islamic State is merely the spearhead in this war.”

Other spiritual authorities who help explain ISIS’ policies to foreigners and potential converts agree with Baghdadi. The influential Muslim preacher Abu Baraa has similarly stated that “the world is divided into two camps. Make sure you are on the side of the Muslims. You shouldn’t be on the side of the infidels, nor should you be on the fence, neutral…”

This approach is, of course, quite convenient for ISIS, since the organization needs to draw as many Muslims as possible to its camp. And so, thinking as ISIS, we realized that we must find a way to turn this seemingly small conflict of ours into a full-blown religious war: Muslims against everyone else.

Unfortunately, it seems most Muslims around the world do not agree with those ideas.

How could we convince them to accept the truth of the global religious war?

It was obvious that we needed to create a fracture between the Muslim and Christian world, but world leaders weren’t playing to our tune. The last American president, Barack Obama, fiercely refused to blame Islam for terror attacks, emphasizing that “We are not at war with Islam.”

French president Francois Hollande was even worse for our cause: after an entire summer of terror attacks in France, he still refused to blame Islam. Instead, he instituted a new Foundation for Islam in France, to improve relations with the nation’s Muslim community.

The situation was clearly dire. We needed reinforcements in fighters from Western countries. We needed Muslims to join us, or at the very least rebel against their Western governments, but very few were joining us from Europe. Reports put the number of European Muslims joining ISIS at barely 4,000, out of 19 million Muslims living in Europe. That means just 0.02% of the Muslim population actually cared enough about ISIS to join us!
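For anyone who wants to check the arithmetic behind that percentage, it follows directly from the two figures just quoted:

$$\frac{4{,}000}{19{,}000{,}000} \approx 0.00021 = 0.021\% \approx 0.02\%$$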

Things were even worse in the USA, where, according to the Pew Research Center, Muslims were generally content with their lives. They were just as likely as other Americans to have earned college degrees and attended graduate school, and to report household incomes of $100,000 or more. Nearly two thirds of Muslims stated that they “do not see a conflict between being a devout Muslim and living in a modern society”. Not much chance of inciting a holy war there.

So we agreed to try the usual things: planning terror attacks, making as much noise as we possibly could, keeping up the fight in the Middle East, and recruiting Muslims on social media. But we realized that things really needed to change if radical Islam were to have any chance at all. We needed a new kind of world leader: one who would play by our ideas of a global conflict; one who would close borders to Muslims, and make Muslim immigrants feel unwanted in their own countries; one who would turn a deaf ear to the plea of refugees, simply because they came from Muslim countries.

After a single week in ISIS, it was clear that the organization desperately needed a world leader who thinks and acts like that.

Do you happen to know someone who might fit that bill?

[Image: Donald Trump]

When Reality Changes More Quickly than Science Fiction

Brandon Sanderson is one of my favorite fantasy and science fiction authors. He produces new books at an incredible pace, and his writing quality does not seem to suffer for it. Steelheart, the first book in his recent sci-fi trilogy The Reckoners, was published in September 2013. Calamity, the third and last book in the series, was published in February 2016. So just three years passed between the first and the last book in the series.

[Image: The Reckoners trilogy. Source: Brittany Zelkovich]

The books themselves describe a post-apocalyptic future, around ten years away from us. In the first book, the hero lives in one of the most technologically advanced cities in the world, with electricity, smartphones, and sophisticated technology at his disposal. Sanderson describes sophisticated weapons used by the police forces in the city, including laser weapons and even mechanized war suits. By the third book, our hero reaches another technologically advanced outpost of humanity, and is suddenly surrounded by weaponized aerial drones.

You may say that the first city simply chose not to use aerial drones, but that explanation is a bit sketchy, as anyone who has read the books can testify. Instead, it seems to me that in the three years that passed since the original book was published, aerial drones finally made a large enough impact on the general mindset that Sanderson could no longer ignore them in his vision of the future. He realized that his readers would look askance at any vision of the future that does not include aerial drones of some kind. In effect, the drones have become part of the way we think about the future. We find it difficult to imagine a future without them.

Usually, our visions of the future change relatively slowly and gradually. In the case of the drones, it seems that within three years they’ve moved from an obscure technological item to a common myth the public shares about the future.

Science fiction, then, can show us what people in the present expect the future to look like. And therein lies its downfall.

 

Where Science Fiction Fails

Science fiction can be used to help us explore alternative futures, and it does so admirably well. However, best-selling books must reach a wide audience and resonate with many readers on several different levels. In order to do that, the most popular science fiction authors cannot stray too far from our current notions. They cannot let go of our natural intuitions and core feelings: love, hate, the appreciation we have for individuality, and many others. They can explore themes in which the anti-hero, or The Enemy, defies these commonalities that we share in the present. However, if the author wants to write a really popular book, he or she will take care not to completely forgo the reality we know.

Of course, many science fiction books are meant for an ‘in-house’ audience: the hard-core sci-fi readers who are eager to think outside the box of the present. Alastair Reynolds, in his Revelation Space series, for example, succeeds in writing sci-fi literature for exactly this audience. He writes stories that in many aspects transcend notions of individuality, love and humanity. And he pays the price for this transgression, as his books (to the best of my knowledge) have yet to appear on the New York Times Best Seller list. Why? As one disgruntled reviewer writes about Reynolds’ book Chasm City –

“I prefer reading a story where I root for the protagonist. After about a third of the way in, I was pretty disturbed by the behavior of pretty much everyone.”

[Image: Chasm City book cover]

Highly popular sci-fi literature is thus forced never to let go completely of present paradigms, which sadly limits its use as a tool for developing and analyzing far-off futures. On the other hand, it’s conceivable that an annual analysis of the most popular sci-fi books could provide us with an understanding of the public state of mind regarding the future.

Of course, there are much easier ways to determine how much hype certain technologies receive in the public sphere. It’s likely that by running data mining algorithms on the content of technology blogs and websites, we would reach better conclusions. Such algorithms can also be run practically every hour of every day. So yeah, that’s probably a more efficient route to figuring out how the public views the future of technology.
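As a rough illustration of what such an analysis could look like – and only an illustration, since the URLs below are placeholders and a real trend tracker would need many more sources and smarter text processing – a few lines of Python are enough to count how often different technologies are mentioned across a set of blog posts:

```python
# Toy sketch of mining tech-blog text for mentions of emerging technologies.
# The URLs are placeholders - swap in real blog or RSS feed addresses.
import re
import urllib.request
from collections import Counter

SOURCES = [
    "https://example.com/tech-blog-post-1",
    "https://example.com/tech-blog-post-2",
]
KEYWORDS = ["drone", "genetic engineering", "virtual reality", "3d printing"]

counts = Counter()
for url in SOURCES:
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            text = response.read().decode("utf-8", errors="ignore").lower()
    except OSError:
        continue  # skip sources that fail to load
    for keyword in KEYWORDS:
        counts[keyword] += len(re.findall(re.escape(keyword), text))

# The keywords mentioned most often are, crudely, the ones receiving the most hype.
for keyword, count in counts.most_common():
    print(f"{keyword}: {count} mentions")
```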

But if you’re looking for an excuse to read science fiction novels for a purely academic reason, just remember you found it in this blog post.

 

 

The Future of Genetic Engineering: Following the Eight Pathways of Technological Advancement

The future of genetic engineering at the moment is a mystery to everyone. The concept of reprogramming life is an oh-so-cool idea, but it is mostly being used nowadays in the most sophisticated labs. How will genetic engineering change in the future, though? Who will use it? And how?

In an attempt to provide a starting point for a discussion, I’ve analyzed the issue according to Daniel Burrus’ “Eight Pathways of Technological Advancement”, found in his book Flash Foresight. While the book provides more insights about creativity and business skills than about foresight, it does contain some interesting gems like the Eight Pathways. I’ve led workshops in the past where I taught chief executives how to use this methodology to gain insights about the future of their products, and it has been a great success. So in this post we’ll try applying it to genetic engineering – and we’ll see what comes out.

[Image: Flash Foresight book cover]

Eight Pathways of Technological Advancement

Make no mistake: technology does not “want” to advance or to improve. There is no law of nature dictating that technology will advance, or in what direction. Human beings improve technology, generation after generation, to better solve their problems and make their lives easier. Since we roughly understand humans and their needs and wants, we can often identify how technologies will improve in order to answer those needs. The Eight Pathways of Technological Advancement, therefore, are generally those that adapt technology to our needs.

Let’s go briefly over the pathways, one by one. If you want a better understanding and more elaborate explanations, I suggest you read the full Flash Foresight book.

First Pathway: Dematerialization

By dematerialization we mean literally removing atoms from the product, leading directly to its miniaturization. Cellular phones, for example, have become much smaller over the years, as did computers, data storage devices and generally any tool that humans wanted to make more efficient.

Of course, not every product undergoes dematerialization. Even if we were to miniaturize cars’ engines, the cars themselves would still need to stay large enough to hold at least one passenger comfortably. So we need to take into account that the device should still be able to fulfill its original purpose.

Second Pathway: Virtualization

Virtualization means that we take certain processes and products that currently exist or are conducted in the physical world, and transfer them fully or partially into the virtual world. In the virtual world, processes are generally streamlined, and products have almost no cost. For example, modern car companies take as little as 12 months to release a new car model to market. How can engineers complete the design, modeling and safety testing of such complicated models in less than a year? They simply use virtualized simulation and modeling tools to design the cars, up to the point of crashing virtual cars, with virtual crash dummies in them, into virtual walls to gain insights about their (physical) safety.

[Image: Thanks to virtualization, crash dummies everywhere can relax. Image originally from @TheCrashDummies.]

Third Pathway: Mobility

Human beings invent technology to help them fulfill certain needs and take care of their woes. Once that technology is invented, it’s obvious that they would like to enjoy it everywhere they go, at any time. That is why technologies become more mobile as the years go by: in the past, people could only speak on the phone from the post office; today, wireless phones can be used anywhere, anytime. Similarly, cloud computing enables us to work on every computer as though it were our own, by utilizing cloud applications like Gmail, Dropbox, and others.

Fourth Pathway: Product Intelligence

This pathway does not need much of an explanation: we experience its results every day. Whenever our GPS navigation system speaks up in our car, we are reminded of the artificial intelligence engines that help us in our lives. As Kevin Kelly wrote in his WIRED piece in 2014 – “There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ.”

Fifth Pathway: Networking

The power of networking – connecting people and items to each other – has become clear in our modern age: Napster was the result of networking; torrents are the result of networking; even bitcoin and blockchain technology are manifestations of networking. Since products and services can gain so much from connecting their users, many of them take this pathway into the future.

Sixth Pathway: Interactivity

As products gain intelligence of their own, they also become more interactive. Google completes our search phrases for us; Amazon suggests the products we should desire according to our past purchases. These service providers interact with us automatically, to provide a better service to the individual, instead of catering to some average of the masses.

Seventh Pathway: Globalization

Networking means that we can make connections all over the world, and as a result, products and services become global. Crowdfunding firms like Kickstarter, which suddenly enable local businesses to gain support from the global community, are a great example of globalization. Small firms can find themselves capable of catering to a global market thanks to improvements in mail delivery systems – like a company that delivers socks monthly – and that is another example of globalization.

Eighth Pathway: Convergence

Industries are converging, and so are services and products. The iPhone is a convergence of a cellular phone, a computer, a touch screen, a GPS receiver, a camera, and several other products that have come together to create a unique device. Similarly, modern aerial drones could also be considered a result of the convergence pathway: a camera, a GPS receiver, an inertial measurement unit, and a few propellers to carry the entire unit in the air. All of the above are useful on their own, but together they create a product that is much more than the sum of its parts.

 

How could genetic engineering progress along the Eight Pathways of technological improvement?

 

Pathways for Genetic Engineering

First, it’s safe to assume that genetic engineering as a practice will require less space and fewer tools to conduct (dematerializing genetic engineering). That is hardly surprising, since biotechnology companies are constantly releasing new kits and appliances that streamline, simplify and add efficiency to lab work. This also answers the need for mobility (the third pathway), since it means complicated procedures could be performed outside the top universities and labs.

As part of streamlining the work process of genetic engineers, some elements will be virtualized. As a matter of fact, the virtualization of genetic engineering has been taking place over the past two decades, with scientists ordering DNA and RNA sequences over the internet, and browsing virtual genomic databases like NCBI and UCSC. The next step of virtualization seems to be occurring right now, with companies like Genome Compiler creating ‘browsers’ for the genome, with bright colors and easily understandable explanations that reduce the level of skill needed to plan an experiment involving genetic engineering.

[Image: A screenshot from Genome Compiler]
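As a concrete taste of this virtualization, here is a minimal sketch of how anyone can already pull a published DNA sequence straight from NCBI’s servers. It assumes the Biopython package is installed; the e-mail address is a placeholder, and the accession number (human beta-globin mRNA) is used purely as an example:

```python
# Minimal sketch: fetching a published DNA sequence from NCBI.
# Assumes Biopython is installed (pip install biopython).
from Bio import Entrez, SeqIO

Entrez.email = "your.name@example.com"  # NCBI asks users to identify themselves

# NM_000518 is the human beta-globin (HBB) mRNA - an example accession only.
handle = Entrez.efetch(db="nucleotide", id="NM_000518",
                       rettype="fasta", retmode="text")
record = SeqIO.read(handle, "fasta")
handle.close()

print(record.id)
print(len(record.seq), "bases")
print(record.seq[:60], "...")
```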

How can we apply the pathway of Product Intelligence to genetic engineering? Quite easily: virtual platforms for designing genetic engineering experiments will include AI engines that aid the experimenter with his task. The AI assistant will understand what the experimenter wants to do; suggest approaches, methodologies and DNA sequences that will help him accomplish it; and possibly even – in a decade or two – conduct the experiment automatically. Obviously, that also answers the criterion of Interactivity.

If this described future sounds far-fetched, you should take into account that there are already lab robots conducting highly convoluted experiments, like Adam and Eve. As the field of robotics makes strides forward, it is actually possible that we will see similar rudimentary robots working in makeshift do-it-yourself biology labs.

Networking and Globalization are essentially the same for the purposes of this discussion, and complement Virtualization nicely. Communities of biology enthusiasts are already forming all over the world, and they’re sharing their ideas and virtual schematics with each other. The iGEM (International Genetically Engineered Machine) annual competition is good evidence of that: undergraduate students worldwide take part in this competition, designing useful pieces of genetic code and sharing them freely with each other. That’s Networking and Globalization for sure.

Last but not least, we have Convergence – the convergence of processes, products and services into a single overarching system of genetic engineering.

Well, then, what would a convergence of all the above pathways look like?

 

The Convergence of Genetic Engineering

Converging all of the pathways above leads us to a future in which genetic engineering can be performed by nearly anyone, at any place. The process of designing genetic engineering projects will be largely virtualized, and will be aided by artificial assistants and advisors. The actual genetic engineering will be conducted in sophisticated labs – as well as in makers’ houses, and in DIY enthusiasts’ kitchens. Ideas for new projects, and designs of successful past projects, will be shared on the internet. Parts of this vision – like the virtualization of experiments – are happening right now. Other parts, like AI involvement, are still in the works.

What does this future mean for us? Well, it all depends on whether you’re optimistic or pessimistic. If you’re prone to pessimism, this future may look to you like a disaster waiting to happen. When teenagers and terrorists are capable of designing and creating deadly bacteria and viruses, the future of mankind is far from safe. If you’re an optimist, you could consider that as the power to re-engineer life comes down to the masses, innovations will rise everywhere. We will see glowing trees replacing lightbulbs in the streets, genetically engineered crops with better traits than ever before, and therapeutics (and drugs) being synthesized in human intestines. The truth, as usual, is somewhere in between – and we still have to discover it.

 

Conclusion

If you’ve been reading this blog for some time, you may have noticed a recurring pattern: I inquire into a certain subject, and then analyze it according to a certain foresight methodology. So far, such posts have covered the Business Theory of Disruption (used to analyze the future of collectible card games), Causal Layered Analysis (used to analyze the future of aerial drones and of medical mistakes) and Pace Layer Thinking. I hope to go on giving you some orderly and proven methodologies that help in thinking about the future.

How you actually use these methodologies in your business, class or salon talk – well, that’s up to you.

 

 

When the Marine Corps is Using Science Fiction to Prepare for the Future

When most of us think of the Marine Corps, we usually imagine sturdy soldiers charging headlong into battle, or carefully sniping at an enemy combatant from the tops of buildings. We probably don’t imagine them reading – or writing – science fiction. And yet, that’s exactly what 15 marines are about to do two weeks from now.

The Marine Corps Warfighting Lab (I bet you didn’t know they have one) and The Atlantic Council are holding a Science Fiction Futures Workshop in early February. And guess what? They’re looking for “young, creative minds”. You probably have to be a marine, but even if you aren’t – maybe you’ll have a chance if you submit your application as well.

[Image: Marines futures workshop]

 

Why “Magic: the Gathering” is Doomed: Lessons from the Business Theory of Disruption

Twenty years ago, when I was young and beautiful, I picked up a wrapped pack of cards in a computer games store, and read for the first time the tag “Magic: the Gathering”. That was the beginning of my long-time romance with the collectible card game. I imported the game to Israel, translated the rules leaflet to Hebrew for my friends, and went on to play semi-professionally for twenty years, up to the point when I became the Israeli champion. The game has pretty much shaped my years as a teenager, and has helped me make friends and meet interesting people from all over the world.

That is why it’s so sad to me to see the state of the game right now, and realize that it is almost certainly doomed to fail in the long run.

 

[Image: Magic: The Gathering. The game that has bankrupted thousands of parents.]

 

The Rise and Decline of Magic the Gathering

Make no mistake: Magic the Gathering (just Magic, for short) is still the top dog among collectible card games in the physical world. According to a report released in 2014, the annual revenue from Magic grew by 182% between 2009 and 2014, reaching a total of around $250 million a year. That’s a lot of money, to be sure.

The only problem is that Hearthstone, a digital card game released at the beginning of 2014, has reached annual revenues of around $240 million in less than two years. I will not be surprised to see those numbers grow even larger in the future.

This is a bizarre situation. Wizards of the Coast (WotC), the company behind Magic, had twenty years to take the game online and turn it into a success. They failed miserably, and their meager attempts at doing so became a target for scorn and ridicule from players worldwide. While WotC did create an online platform to play Magic on, there were plenty of complaints: for starters, playing was extremely costly, since the virtual card packs generally cost the same as packs in the physical world. An evening of playing a draft – a small tournament with only eight players – would’ve cost each player around ten dollars, and would’ve required a time investment of up to four straight hours, much of it wasted in waiting for the other players in the tournament to finish their matches with each other and move on to the next round.

These issues meant that Magic Online was mostly reserved for the top players, who had the money and the willingness to spend it on the game. WotC was aware of the disgruntlement about the state of things, but chose to do nothing – after all, it had no real contenders in the physical or the digital market. What did it have to fear? It had no real reason to change. In fact, the only smart decision WotC managers could take was NOT to take a risk and try to change the online experience, but to keep on making money – and lots of it – from a game that functioned well enough. And they could continue doing so until their business was rudely and abruptly disrupted.

 

The Business Theory of Disruption

The theory of disruption was originally conceived by Harvard Business School professor Clayton M. Christensen, and described in his best-selling book The Innovator’s Dilemma. Christensen followed the evolution of several industries, particularly hard drives, but also metalworking, retail stores and tractors. He found that in each sector, the managers supported research and development, but all that R&D produced only two general kinds of innovations: sustaining innovations and disruptive ones.

[Image: The Innovator’s Dilemma book cover]

The sustaining innovations were generally those that the customers asked for: increasing hard drive storage capacity, or making data retrieval faster. They led to obvious improvements, which brought immediate and clear benefit to the company in a competitive market.

The disruptive innovations, on the other hand, were those that completely changed the picture, and actually had a good potential to cost the company money in the short-term. Furthermore, the customers saw little value in them, and so the managers saw no advantage in pursuing these innovations. The company-employed engineers who came up with the ideas for disruptive innovations simply couldn’t find support for them in the company.

A good example of the process of disruption is the hard drive industry, a few years before the transition from 8-inch drives to 5.25-inch drives occurred. A quick look at the following parameters of the two contenders, back in 1981, explains immediately why managers in the 8-inch drive manufacturing companies were wary of switching over to the 5.25-inch drive market. The 5.25-inch drives were simply inefficient, and lost the competition with 8-inch drives in almost every parameter except their size! And while size is obviously important, the computer market at the time consisted mainly of “minicomputers” – computers that cost ~$25,000, and were the size of a small refrigerator. At that size, the physical volume of the hard drives was simply irrelevant.

| Attribute | 8-Inch Drives (Minicomputer Market) | 5.25-Inch Drives (Desktop Computer Market) |
| --- | --- | --- |
| Capacity (megabytes) | 60 | 10 |
| Physical volume (cubic inches) | 566 | 150 |
| Weight (pounds) | 21 | 6 |
| Access time (milliseconds) | 30 | 160 |
| Cost per megabyte | $50 | $200 |
| Unit cost | $3,000 | $2,000 |

The table has been copied from the book The Innovator’s Dilemma by Clayton M. Christensen.
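A quick sanity check on the table: the cost-per-megabyte row is simply the unit cost divided by the capacity, which is exactly why the 8-inch drives looked like the better buy to minicomputer makers despite their higher price tag:

$$\frac{\$3{,}000}{60\ \text{MB}} = \$50\ \text{per MB} \qquad \frac{\$2{,}000}{10\ \text{MB}} = \$200\ \text{per MB}$$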

And so, 8-inch drive companies continued to focus on 8-inch drives, while a few renegade engineers opened new companies and worked hard on developing better 5.25-inch drives. In a few years, the 5.25-inch drives were just as efficient as the 8-inch drives, and a new market formed: that of the personal desktop computer. Suddenly, every computer maker in the market needed 5.25-inch drives.

[Image: One of the first minicomputers. On display at the Vienna Technical Museum. Image found on Wikipedia.]

Now, the 8-inch drive company managers were far from stupid or ignorant. When they saw that there was a market for 5.25-inch drives, they decided to leap on the opportunity as well, and develop their own 5.25-inch drives. Sadly, they were too late. They discovered that it takes time and effort to become acquainted with the demands of the new market, to adapt their manufacturing machinery, and to change the entire company’s workflow in order to produce and supply computer makers with 5.25-inch drives. They joined the competition far too late, and even though they had been the leviathans of the industry just ten years earlier, they soon sank to the bottom and were driven out of business.

What happened to the engineers who drove the 5.25-inch drive revolution forward, you may ask? They became CEOs of the new 5.25-inch drive manufacturing companies. A few years later, when their own young engineers came to them and suggested that they invest in developing the new and still-faulty 3.5-inch drives, they decided that there was no market for the invention right then, no demand for it, and that it was too inefficient anyway.

Care to guess what happened next? Ten years later, the 3.5-inch drives took over, portable computers utilizing them were everywhere, and the 5.25-inch drive companies crumbled away.

That is the essence of disruption: decisions that make sense in the present are clearly incorrect in the long term, when markets change. Companies that relax and invest only in sustaining innovations, instead of trying to radically change their products and reshape the markets themselves, are doomed to fail. In Peter Diamandis’ words –

“If you aren’t disrupting yourself, someone else is.”

Now that you understand the basics of the Theory of Disruption, let’s see how it applies to Magic.

 

Magic and Disruption

Wizards of the Coast has been making almost exclusively sustaining improvements over the last twenty years: its talented R&D team focused on releasing new expansions with new cards and new playing mechanics. WotC did try to disrupt itself once by creating the Magic Online platform, but failed to support and nurture this disruptive innovation. The online platform remained mainly an outdated relic – a relic that made money, to be sure, but was slowly becoming irrelevant in the online world of collectible card games.

In the last five years, many other collectible card games reared their heads online, including minor successes like Shadow Era (200,000 players, ~$156,000 annual revenue) and Urban Rivals (estimated ~$140,000 annual revenue). Each of the above made discoveries in the online world: they realized that players need to be offered cards for free, that they need to be lured to play every day, and that the free-to-play model can still prove profitable since the company’s costs are close to zero: the firm doesn’t need to physically print new cards or to distribute them to retailers. But these upstarts were still so small that WotC could afford to effectively ignore them. They didn’t pose a real threat to Magic.

Then Hearthstone burst into existence in 2014, and everything changed.


Hearthstone’s developers took the best traits of Magic and combined them with all the insights the online gaming industry has developed over recent years. They made the game essentially free to play to attract a large number of players, understanding that their revenues would come from the small fraction of players who spend money on the game. They minimized wasted time by setting a time limit on every player’s turn, and by establishing a rule that players can only act during their own turn (so there’s no need to wait for the other player’s response after every move). They even broke up Magic’s eight-person draft tournaments, so that every player who drafts a deck can play against any other player who drafted a deck, at any time. There’s no time wasted in Hearthstone – just games to play and fun to be had.

WotC was still deep asleep at that time. In July 2014, Magic brand manager Liz Lamb-Ferro told GamesBeat that –

“If you’re looking for just that immediate face-to-face, back-and-forth action-based game with not a lot of depth to it, then you can find that. … But if you want the extras … then you’re eventually going to find your way to Magic.”

Lamb-Ferro was right – Hearthstone IS a simpler game – but that simplicity streamlines gameplay, and thus makes the game more rapid and enjoyable for many players. And even if we were to accept that Hearthstone does not attract veteran players who “want the extras” (actually, it does), WotC should have realized that other online collectible card games would soon combine Magic’s sophistication with Hearthstone’s mechanisms for streamlining gameplay. And indeed, in 2014 a new game – SolForge – took all of the strengths of Hearthstone, while adding a mechanic of card transformation (each card transforming into three different versions of itself) that could only be possible in card games played online. SolForge doesn’t even have a physical version and could never have one, and the game is already costing Magic a few more veteran players.

This is the point when WotC began realizing that they were falling far behind the curve. And so, in the middle of 2015 they released Duels of the Planeswalkers 2016. I won’t even bother detailing all the infuriating problems with the game. Suffice it to say that it garnered more negative reviews than positive ones, and made clear that WotC was still lagging far behind its competitors in its understanding of the virtual world, user experience, and what players actually want. In short, WotC found themselves in the position of the 8-inch drive manufacturers, suddenly realizing that the market had changed under their noses in less than two years.

 

What Could WotC do?

The sad truth is that WotC can probably do nothing right now to fix Magic. The firm can continue churning out sustaining improvements – new expansions and new exciting cards – but it will find itself hard pressed to take over the digital landscape. Magic is a game that was designed for the physical world, and not for the current frenzied pace of the virtual collectible card games. Magic simply isn’t suitable for the new market, unless WotC changes the rules so much that it’s no longer the same game.

Could WotC change the rules in such a dramatic fashion? Yes, but at a great cost. The company could recreate the game online with new cards and rules, but it would have to invest time and effort in relearning the workings of the virtual world and creating a new platform for the revised Magic. Unfortunately, it’s not clear that WotC will have time to do that with Hearthstone, SolForge and a horde of other card games snarling at its heels. The future of Magic online does not look bright, to say the least.

Does that mean Magic the Gathering will vanish completely? Probably not. The Magic brand is still strong everywhere except for the virtual world, which means that in the next five years the game will remain in existence mostly in the physical world, where it will bring much joy to children in school breaks, and much money to the pockets of WotC. During these five years, WotC will have the opportunity to rethink and recreate the game for the next big market: virtual and augmented reality. If the firm succeeds in that front, it’s possible that Magic will be reinvented for the new-new market. If it fails and elects to keep the game anchored only in the physical world, then Magic will slowly but surely vanish away as the market changes and new and exciting games take over the attention span of the next generation.

That’s what happens when you disregard the Theory of Disruption.

 

Are we Entering the Aerial Age – or the Age of Freedom?

A week ago I covered in this blog the possibility of using aerial drones for terrorist attacks. The following post dealt with the Failure of Myth and covered Causal Layered Analysis (CLA) – a futures studies methodology meant to counter the Failure of Myth and allow us to consider alternative futures radically different from the ones we tend to consider intuitively.

In this blog post I’ll combine insights from both recent posts together, and suggest ways to deal with the terrorism threat posed by aerial drones, in four different layers suggested by CLA: the Litany, the Systemic view, the Worldview, and the Myth layer.

To understand why we have to use such a wide-angle lens for the issue, I would compare the proliferation of aerial drones to another period in history: the transition between the Bronze Age and the Iron Age.

 

From Bronze to Iron

Sometime around 1300 BC, iron smelting was discovered by our ancient forefathers, presumably in the Anatolia region. The discovery rapidly diffused to many other regions and civilizations, and changed the world forever.

If you ask people why iron weapons are better than bronze ones, they’re likely to answer that iron is simply stronger, lighter and more durable than bronze. However, the truth is that iron weapons are not much more efficient than bronze ones. The real importance of iron smelting, according to “A Short History of War” by Richard A. Gabriel and Karen S. Metz, is this:

“Iron’s importance rested in the fact that unlike bronze, which required the use of relatively rare tin to manufacture, iron was commonly and widely available almost everywhere… No longer was it only the major powers that could afford enough weapons to equip a large military force. Now almost any state could do it. The result was a dramatic increase in the frequency of war.”

It is easy to imagine political and national leaders using only the first and second layers of CLA – the Litany and the Systemic view – at the transition from the Bronze Age to the Iron Age. “We should bring these new iron weapons to all our soldiers”, they probably told themselves, “and equip the soldiers with stronger shields that can deflect iron weapons”. Even as they enacted these changes in their armies, the worldview itself shifted, and warfare was vastly transformed because of the large number of civilians who could suddenly wield an iron weapon. Generals who thought that preparing for the change merely meant equipping their soldiers with iron weapons found themselves on the battlefield facing armies much larger than their own, because of new conscription models that their opponents had developed.

Such changes in warfare and in the existing worldview could have been realized in advance by utilizing the third and fourth layers of CLA – the Worldview and the Myth.

Aerial drones are similar to Iron Age weapons in that they are proliferating rapidly. They can be built or purchased at ridiculously low prices, by practically everyone. In the past, only the largest and most technologically sophisticated governments could afford to employ aerial drones. Nowadays, every child has them. In other words, the world itself is turning against everything we thought we knew about the possession and use of unmanned aerial vehicles. Such a dramatic change – one that our descendants may yet come to call the Aerial Age when they look back at history – forces us to rethink everything we knew about the world. We must, in short, analyze the issue from a wide-angle view, with an emphasis on the third and fourth layers of CLA.

How, then, do we deal with the threat aerial drones pose to national security?

 

First Layer: the Litany

The intuitive way to deal with the threat posed by aerial drones is simply to reinforce the measures we’ve had in place before. Under the thinking constraints of the first layer, we should basically strive to strengthen police forces, and to provide larger budgets for anti-terrorist operations. In short, we should do just as we did in the past, but more and better.

It’s easy to see why public systems love the litany layer: these measures build reputation and generate a general feeling that “we’re doing something to deal with the problem”. What’s more, they require extra budget (to be obtained from congress) and make the organization larger along the way. What’s not to like?

Second Layer: the Systemic View

Under the systemic view we can think about the police forces, and the tools they have to deal with the new problem. It immediately becomes obvious that such tools are sorely lacking. Therefore, we need to improve the system and support the development of new techniques and methodologies to deal with the new threat. We might support the development of anti-drone weapons, for example, or open an entirely new police department dedicated to dealing with drones. Police officers will be trained to deal with aerial drones, so that nothing is left to chance. The judicial and regulatory systems lend themselves to the struggle at this layer by issuing highly regulated licenses to operate aerial drones.

 

[Image: An anti-drone gun. Originally from BattelleInnovations and downloaded from TechTimes.]

 

Again, we could stop the discussion here and still have a highly popular set of solutions. As we delve deeper into the Worldview layer, however, the opposition starts building up.

Third Layer: the Worldview

When we consider the situation at the worldview layer, we see that the proliferation of aerial drones is simply a by-product of several technological trends: the miniaturization and condensation of electronics, sophisticated artificial intelligence (at least by the standards of 20-30 years ago) for controlling the rotor blades, and even personalized manufacturing with 3D printers, so that anyone can construct his or her own personal drone in the garage. All of the above lead to the Aerial Age – in which individuals can explore the sky as they like.

 

[Image: Exploration of the sky is now in the hands of individuals. Image originally from DailyMail India.]

 

Looking at the world from this point of view, we immediately see that the vast expected proliferation of aerial drones in the coming decade will force us to reconsider our previous worldviews. Should we really focus on local or systemic solutions, rather than preparing ourselves for this new Aerial Age?

We can look even further than that, of course. In a very real way, aerial drones are but a symptom of a more general change in the world. The Aerial Age is but one aspect of the Age of Freedom, or the Age of the Individual. Consider that the power of designing and manufacturing is being taken from nations and granted to individuals via 3D printers, powerful personal computers, and the internet. As a result of these inventions and others, individuals today hold power that once belonged only to the greatest nations on Earth. The established worldview, in which nations are the sole holders of power, is changing.

When one looks at the issue like this, it is clear that such a dramatic change can only be countered or mitigated by dramatic measures. Nations that want to retain their power and prevent terrorist attacks will be forced to break rules that were created long ago, back in the Age of Nations. It is entirely possible that governments and rulers will have to sacrifice their citizens’ privacy, and turn to monitoring their citizens constantly much as the NSA did – and is still doing to some degree. When an individual dissident has the potential to bring harm to thousands and even millions (via synthetic biology, for example), nations can ill afford to take any chances.

What are the myths that such endeavors will disrupt, and what new myths will they be built upon?

Fourth Layer: the Myth

I’ve already identified a few myths that will be disrupted by the new worldview. First and foremost, we will let go of the idea that only a select few can explore the sky. The new myth is that of Shared Sky.

The second myth to be disrupted is that nations hold all the technological power, while terrorists and dissidents are reduced to using crude bombs at best, or pitchforks at worst. This myth is no longer true, and it will be replaced by a myth of Proliferation of Technology.

The third myth to be dismissed is that governments can protect their citizens efficiently with the tools they have in the present. When we have such widespread threats in the Age of Freedom, governments will experience a crisis in governance – unless they turn to monitoring their citizens so closely that any pretense of privacy is lost. And so, it is entirely possible that in many countries we will see the emergence of a new myth: Safety in Exchange for Privacy.

 

Conclusion

In this post I’ve analyzed the issue of aerial drones being used for terrorist attacks, utilizing the Causal Layered Analysis methodology. When I look at the results, it’s easy to see why many decision makers are reluctant to solve problems at the third and fourth layers – Worldview and Myth. The solutions found in the lower layers – the Litany and the Systemic view – are so much easier to understand and to explain to the public. Regardless, if you want to actually understand the possibilities the future holds in any subject, you must ignore the first two layers in the long term, and focus instead on the bigger picture.

And with that said – happy new year to one and all!

The Failure of Myth and the Future of Medical Mistakes

 

Please note: this is another chapter in a series of blog posts about Failures in Foresight. You may want to also read the other blog posts dealing with the Failure of Nerve, the Failure of the Paradigm, and the Failure of Segregation.

 

At the 1900 World Exhibition in Paris, French artists made an attempt to forecast the shape of the world in 2000. They produced a few dozen vivid and imaginative drawings (clearly they did not succumb to the Failure of the Paradigm!).

Here are a few samples from the World Exhibition. Can you tell what all of those have in common with each other?

[Image: Police motorcycles in the year 2000]
[Image: Skype in the year 2000]
[Image: Phone calls and radio in the year 2000]
[Image: Fishing for birds in the year 2000]

 

Psychologist Daniel Gilbert wrote about similar depictions of the future in his book “Stumbling on Happiness” –

“If you leaf through a few of them, you quickly notice that each of these books says more about the times in which it was written than about the times it was meant to foretell.”

You only need to take another look at the images to convince yourselves of the truth of Gilbert’s statement. The women and men are dressed the same way they were dressed in 1900, except when they go ‘bird hunting’ – in which case the gentlemen wear practical swimming suits, whereas the ladies still stick with their cumbersome dresses underwater. Policemen still employ swords and brass helmets, and of course there are no policewomen. Last but not least, it seems that the future is entirely reserved for the Caucasian race, since nowhere in these drawings can you see persons of African or Asian descent.

 

The Failure of Myth

While some of the technologies depicted in these old paintings actually became reality (Skype is a nice example), it is clear the artists completely failed to capture a larger change. You may call this a change in the zeitgeist, the spirit of the generation, or in the myths that surround our existence and lives. I’ll be calling this a Failure of Myth, and I hope you’ll agree that it’s impossible to consider the future without also taking into account these changes in our mythologies and underlying social and cultural assumptions: men can be equal to women, people of color have rights similar to those of white folks, and LGBT people have just the same right to exist as heterosexuals. None of these assumptions would’ve been obvious, or included in the myths and stories upon which society is based, a mere fifty years ago. Today they’re taken for granted.

 

[Image: The myth according to which black people have very few real rights was overturned in the 1960s. Few forecasters thought of such an occurrence in advance.]

 

Could we ever have forecast these changes?

Much as with the Failure of the Paradigm, I would posit that we can never accurately forecast the ways in which myths and culture will change. We can hazard some guesses, but that’s just what they are: guesswork that relies more on our present myths than on a solid understanding of the future.

That said, there are certain methodologies used by foresight researchers that could help us at least chart different solutions to problems in the present, in ways that force us to consider our current myths and worldviews – and challenge them when needed. These methodologies allow us to create alternative futures that could be vastly different from the present in the ways that really matter: how people think of themselves, of each other, and of the world around them.

One of the best known methodologies used for this purpose is called Causal Layered Analysis (CLA). It was invented by futures studies expert Sohail Inayatullah, who also describes case studies for using it in his recent book “What Works: Case Studies in the Practice of Foresight”.

In the rest of this blog post, I’ll sum up the practical principles of CLA, and show how they could be used to analyze different issues dealing with the future. Following that, in the next blog post, we’ll take a look again at the issue of aerial drones used for terrorist attacks, and use CLA to consider ways to deal with the threat.

 

[Image: Another Failure of Myth: the ancient Greeks could not imagine a future without slavery. None of their great philosophers could escape the myth of slavery. Image originally from Wikipedia.]

 

 

CLA – Causal Layered Analysis

At the core of CLA is the idea that every problem can be looked at in four successive layers, each deeper than the previous one. Let’s look at each layer in turn, and see how it adds depth to a discussion of a certain problem: the “high rate of medical mistakes leading to serious injury or death”, as Inayatullah describes it in his book. My brief analysis of this problem at every level is almost entirely based on his examples and thoughts.

First Layer: the Litany

The litany is the day-to-day talk. When you’re arguing at dinner parties about the present and the future, you’re almost certainly using the first layer. You’re basically repeating whatever you’ve heard from the media, from the politicians, from thought leaders and from your family. You may make use of data and statistics, but these are only interpreted according to the prevalent and common worldview that most people share.

When we rely on the first layer to consider the issue of medical mistakes, we look at the problem in a largely superficial manner. We can sum up the approach in one sentence: “Physicians make mistakes? Teach them better, and if they still don’t improve, throw them in jail!” In effect, we’re focusing on the people who are making the mistakes – the ones whom it’s so easy to blame. The solutions in this layer are usually short-term solutions, and can be summed up in short sentences that appeal to audiences who share the same worldview.

Second Layer: the Systemic View

Using the systemic view of the second layer, we try to delve deeper into the issue. We no longer blame people (although that does not mean we remove the responsibility for their mistakes from their shoulders); instead, we try to understand how the system itself can contribute to the actions of the individual. To do that we analyze the social, economic and political forces that meld the system into its current shape.

In the case of medical mistakes, the second layer encourages us to start asking tougher questions about the systems under which physicians operate. Could it be, for example, that physicians are rushing their treatments because they are only allowed to spend 5-10 minutes with each patient, as is the custom in many public medical services? Or perhaps the layout of the hospital does not allow physicians to consult easily with each other, and thus prevents them from reaching more solid solutions via teamwork?

The questions asked in the second layer mode of thinking allow us to improve the system itself and make it more efficient. We do not take the responsibility off the shoulders of the individuals, but we do accept that better systems allow and encourage individuals to reach their maximum efficiency.

Third Layer: Worldview

This is the layer where things get tricky for most people. In this layer we try to identify and question the prevalent worldview and how it contributes to the issue. These worldviews are the “cognitive lenses” through which we view and interpret the world.

As we try to analyze the issue of medical mistakes in the third layer, we begin to identify the worldviews behind medicine. We see that in modern medicine, the doctor stands “high above” in the hierarchy of knowledge – certainly much higher than patients. This hierarchy of knowledge and prestige defines the relationship between the physician and the patient. Once we understand this worldview, solutions that would’ve fit in the second layer – like increasing the time physicians spend with patients – seem more like a small bandage on a gut wound than an effective way to deal with the issue.

Another worldview that can be identified and challenged in this layer is the idea that patients actually need to go to clinics or hospitals for check-ups. In an era of tele-presence and electronics, why not make use of wearable computing or digital doctors to take care of many patients? As we recognize this worldview and propose alternatives, we find that systemic solutions like “changing the shape of the hospitals” become unnecessary once more.

Fourth Layer: the Myth

The last layer, the myth, deals with the stories we tell ourselves and our children about the world and the way things work. Wikipedia defines mythology as –

“a collection of myths… [and] stories … [that] explain nature, history, and customs.”

Make no mistake: our children’s books are all myths that serve to teach children how they should behave in society. When my son reads about Curious George, he learns that unrestrained curiosity can lead you into danger, but also to unexpected rewards. When he reads about Hansel and Gretel, he learns of the dangers of trusting strangers and step-mothers. Even fantasy books teach us myths about the value of wisdom, physical prowess and even beauty, as the tall, handsome prince saves the day. Myths are perpetuated everywhere in culture, and are constantly reinforced in our minds through the media.

What can we say about medical mistakes in the Myth level? Inayatullah believes that the deepest problem, immortalized in myth throughout the last two millennia, is that “the doctor knows best”. Patients are taught from a very young age that the physician’s verdict is more important than their own thoughts and feelings, and that they should not argue against it.

While I see the point in Inayatullah’s view, I’m not as certain that it is the reason behind medical mistakes. Instead, I would add a partner-myth: “the human doctor knows best”. This myth is passed on to medical doctors in many institutions, and makes it more difficult for them to rely on computerized analysis, or even to consider that, as human beings, they are biased by nature.

 

Consolidating the Layers

As you may have realized by now, CLA is not used to forecast one accurate future, but is instead meant to deepen our thinking about potential futures. Any discussion about long-term issues should open with an analysis of those issues in each of the four layers, so that the solutions we propose – i.e. the alternative futures – can deal not only with the superficial aspects of the issue, but also with the deeper causes and roots.
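To make it easier to keep the four layers apart in such a discussion, here is a minimal sketch – my own illustration, not part of Inayatullah’s formal method – of how a CLA analysis could be recorded as a simple data structure, using the medical-mistakes example above. All names and entries are purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class CLAAnalysis:
    """Holds a four-layer Causal Layered Analysis of a single issue."""
    issue: str
    litany: list = field(default_factory=list)     # layer 1: day-to-day talk
    systemic: list = field(default_factory=list)   # layer 2: social, economic and political forces
    worldview: list = field(default_factory=list)  # layer 3: the cognitive lenses behind the system
    myth: list = field(default_factory=list)       # layer 4: the deep stories we tell ourselves

    def summary(self) -> str:
        lines = [f"Issue: {self.issue}"]
        for layer in ("litany", "systemic", "worldview", "myth"):
            for item in getattr(self, layer):
                lines.append(f"  [{layer}] {item}")
        return "\n".join(lines)

# The medical-mistakes example from this post, condensed into the four layers.
medical_mistakes = CLAAnalysis(
    issue="High rate of medical mistakes leading to serious injury or death",
    litany=["Retrain the physicians who err, or punish them"],
    systemic=["5-10 minute consultations rush treatment",
              "Hospital layout discourages consulting colleagues"],
    worldview=["The doctor stands at the top of the hierarchy of knowledge"],
    myth=["'The doctor knows best'", "'The human doctor knows best'"],
)

print(medical_mistakes.summary())
```

Writing the layers out explicitly in this way makes it harder to propose a “solution” that answers only the litany while leaving the deeper layers untouched.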

 

Conclusion

The Failure of Myth – i.e. our difficulty in realizing that the future will change not only technologically, but also in the myths and worldviews we hold – is impossible to counter completely. We can’t know which myths will be embraced by future generations, just as we can’t forecast scientific breakthroughs fifty years in advance.

At most, we can be aware of the existence of the Failure of Myth in every discussion we hold about the future. We must assume, time after time, that the myths of future generations will be different from ours. My grandchildren may look at their meat-eating grandfather in horror, or laugh behind his back at his pants and shirt – while they walk naked in the streets. They may believe that complicated decisions should be left solely to computers, or that physical work should never be performed by human beings. These are just some of the possible myths that future generations can develop for themselves.

In the next blog post, I’ll go over the issue of aerial drones used for terrorist attacks, and analyze it using CLA to identify a few possible myths and worldviews that we may need to change in order to deal with this threat.

 

Please note: this is another chapter in a series of blog posts about Failures in Foresight. You may want to also read the other blog posts dealing with the Failure of Nerve, the Failure of the Paradigm, and the Failure of Segregation.

Worst-case Technological Scenarios for 2016: from A.I. Disaster to First DIY Pathogen

 

The futurist Ian Pearson, in his fascinating blog The More Accurate Guide to the Future, recently directed my attention to a short report published just two days ago by Bloomberg Business, which identifies ten of the worst-case scenarios for 2016. To write the report, Bloomberg’s staff asked –

“…dozens of former and current diplomats, geopolitical strategists, security consultants, and economists to identify the possible worst-case scenarios, based on current global conflicts, that concern them most heading into 2016.”

I really love this approach, since many futurists today – particularly the technology-oriented ones – focus mainly on all the good that will come to us soon enough. Ray Kurzweil and Tony Seba (in his book Clean Disruption) forecast a future with abundant energy; Peter Diamandis believes we are about to experience a new wave of consumerism from “the rising billion” of the developing world; Aubrey de Grey forecasts that we’ll uncover means to stop aging in the foreseeable future. And I tend to agree with them all, at least in general terms: humanity is rapidly becoming more technologically advanced and more efficient. If these upward trends continue, we will experience an abundance of resources and a quality of life that far surpasses that of our ancestors.

But what if it all goes wrong?

When analyzing the trends of the present, we often tend to ignore the potential catastrophes, the disasters, and the irregularities and ‘breaking points’ that could occur. Or rather, we acknowledge that such irregularities could happen, but we often attempt to focus on the good instead of the bad. If there’s one thing that human beings love, after all, it’s feeling in control – and unexpected events show us the truth about reality: that much of it is out of our hands.

Bloomberg is taking the opposite approach with the current report (more of a short article, really): they have collected ten of the worst-case scenarios that could still conceivably happen, and have tried to understand how they could come about, and what their consequences would be.

The scenarios range widely in the areas they cover, from Putin sidelining America, to Israel attacking Iran’s nuclear facilities, and down to Trump winning the presidential elections in the United States. There’s even mention of climate change heating up, and the impact harsh winters and deadly summers would have on the world.

Strangely enough, the list includes only one scenario dealing with technology: namely, banks being hit by a massive cyber-attack. In that respect, I think Bloomberg is shining a light on a very large hole in geopolitical and social forecasting: technology-oriented futurists are almost never included in such discussions. Their ideas are usually far too bizarre and alienating for the silver-haired generals, retired diplomats and senior consultants who take part in them. And yet, technologies are a major driving force changing the world. How can we leave them out?

 

Technological Worst-Case Scenarios

Here are a few of my own worst-case scenarios for 2016, revolving around technological breakthroughs. I’ve tried to stick to the present as much as possible, so there are no scientific breakthroughs in this list (it’s impossible to forecast those), and no “cure for aging” or “abundant energy” in 2016. That said, quite a lot of horrible stuff could happen with technologies. Such as –

  • Proliferation of 3D-printed firearms: a single proficient designer could come up with a new design for 3D-printed firearms that reaches an efficiency comparable to that of mass-manufactured weapons. The design would spread like wildfire through peer-to-peer services, and would lead to a complete overhaul of firearm registration protocols in many countries.
  • First pathogen created with CRISPR technology: biology enthusiasts are now using CRISPR, a genetic engineering method so efficient and powerful that ten years ago it would’ve been considered the stuff of science fiction. It’s incredibly easy – at least compared to the past – to genetically manipulate bacteria and viruses using this technology. My worst-case scenario here is that one bright teenager with the right tools at hand will create a new pathogen, release it into the environment and, worse, brag about it online. Even if that pathogen proves relatively harmless, the mass scare that follows will halt research in genetic engineering laboratories around the world, and create panic about do-it-yourself enthusiasts.
  • A major, globe-spanning A.I. disaster: whether due to hacking or to a simple programming mistake, an important A.I. will malfunction. Maybe it will be one – or several – of the algorithms currently trading on stock markets, largely autonomously, conducting a new deal every 740 nanoseconds; no human being can follow their deals as they happen. A previous disaster on that front occurred in 2012, when an algorithm operated by Knight Capital purchased stocks at inflated prices totaling $7 billion – in just 45 minutes. The stock market survived (even if Knight Capital’s stock did not), but what would happen if a few algorithms went haywire at the same time, or in response to one another? That could easily happen in 2016.
  • First implant virus: implants like cardiac pacemakers, or external devices like insulin pumps, can be hacked relatively easily. They do not pack much in the way of security, since they need to be as small and energy-efficient as possible, and in many cases they rely on a wireless connection to the external environment. In my worst-case scenario for 2016, a terrorist manages to hack a pacemaker and create a virus that spreads from one pacemaker to another over the wireless communication between the devices. Finally, on a certain date – maybe September 11? – the virus disables all the infected pacemakers at the same time, or makes them send a burst of electricity through the patients’ hearts, essentially sending them into cardiac arrest.

 

This blog post is not meant to create panic or mass hysteria, but to highlight some of the worst-case scenarios in the technological arena. There are many other possible worst-case scenarios, and Ian Pearson details a few of them in his blog post. My purpose in detailing these is simple: we can’t ignore such scenarios, or keep living our lives on the assumption that “everything is gonna be alright”. We need to plan ahead and consider worst-case scenarios to be better prepared for the future.

Do you have ideas for your own technological worst-case scenarios for the year 2016? Write them down in the comments section!

 

Failures in Foresight: The Failure of Segregation

In this post we’ll embark on a journey back in time, to the year 2000, when you were young and eager students. You’re sitting in a lecture given by a bald and handsome futurist. He promises you that within 15 years, i.e. in the year 2015, the exponential growth in computational capabilities will ensure that you can hold a super-computer in your hands.

“Yeah, right,” a smart-looking student sniggers loudly, “and what will we do with it?”

The futurist explains that the future you will watch movies and listen to music on that tiny computer. You exchange bewildered looks with your friends. You all find it difficult to believe – how could you store large movies on such a small computer? The futurist explains that another trend – the exponential growth in data storage – means that your hand-held super-computer will also store tens of thousands of megabytes.

You see some people in the audience rolling their eyes – promises, promises! Yet you are willing to keep listening. Of course, the futurist then jumps completely off the cliff of rationality, and promises that in 15 years everyone will enjoy wireless connectivity almost everywhere, at speeds of tens of megabytes per second.

“That makes no sense.” The smart student laughs again. “Who will ever need such a wireless network? Almost nobody has laptop computers anyway!”

The futurist reminds you that everyone is going to carry super-computers on their bodies in the future. The heckler laughs again, loudly.

 

phone-1031070_1920.jpg
The smartphone: a result of several trends coming to fruition together. Source: Pixabay.

 

The Failure of Segregation

I assume you realize the point by now. The failure demonstrated in this exchange is what I call the Failure of Segregation. It is an incredibly common failure, stemming from our tendency to focus on a single trend and miss the combined and cumulative impact of two, three or even ten trends acting at the same time.

In the example above, the futurist’s forecast would not have seemed reasonable if only one trend were analyzed. Who needs superfast Wi-Fi if there are no advanced laptops and smartphones to use it? Almost nobody. So from a rational point of view, there is no reason to invest in such a wireless network. It is only when you consider the three trends together – exponential growth in computational capabilities, in data storage and in wireless networking – that you can understand the future.

Every product we enjoy today is the result of several trends coming to fruition together. Facebook, for example, would not have been nearly as successful if not for these trends –

  1. Exponential growth in computational capabilities, so that nearly everyone has a personal computer.
  2. Miniaturization and mobilization of computers into smartphones.
  3. Exponential improvement of digital cameras, so that every smartphone has a camera today.
  4. Cable internet everywhere.
  5. Wireless internet (Wi-Fi) everywhere.
  6. Cellular internet connections provided by the cellular phone companies.
  7. GPS receiver in every smartphone.
  8. The social trend of people using online social networks.

These are only eight trends, but I’m sure there are many others standing behind Facebook’s success. Only by looking at all eight trends could we have hoped to forecast the future accurately.

Unfortunately, it’s not that easy to look into all the possible trends at the same time.

facebook-time-waste.jpg
Facebook: another result of the aggregation of several trends together. Source: LimeTree Online

A Problem of Complexity

Let’s say that you are now aware of the Failure of Segregation, and so you try to contemplate all of the technological trends together, to obtain a more accurate image of the future. If you try to consider just three technological trends (A, B and C) and the ways they could work together to create new products, you would have four possible results: AB, AC, BC and ABC. That’s not so bad, is it?

However, if you add just one more technological trend to the mix, you’ll find yourself with eleven possible results. Do the calculations yourself if you don’t believe me. The formula is relatively simple, with N being the number of trends you’re considering, and X being the number of possible combinations of trends –

X = 2^N - N - 1

It’s obvious that for just ten technological trends, there are about a thousand different ways to combine them together. Considering twenty trends will cause you a major headache, and will bring the number of possible combinations up to one million. Add just ten more trends, and you get a billion possible combinations.
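If you want to check these numbers yourself, here is a quick sketch – my own illustration – that computes the formula above and verifies it by brute force for small values of N. The logic behind the formula: out of the 2^N possible subsets of N trends, we drop the N single-trend subsets and the empty set.

```python
from itertools import combinations

def trend_combinations(n: int) -> int:
    """Number of ways to combine two or more of n trends: 2**n - n - 1."""
    return 2 ** n - n - 1

def brute_force(n: int) -> int:
    """Sanity check: directly enumerate every subset of size >= 2."""
    return sum(1 for k in range(2, n + 1) for _ in combinations(range(n), k))

assert trend_combinations(3) == brute_force(3) == 4    # AB, AC, BC, ABC
assert trend_combinations(4) == brute_force(4) == 11

print(trend_combinations(10))  # 1,013 -> about a thousand
print(trend_combinations(20))  # 1,048,555 -> roughly a million
print(trend_combinations(30))  # 1,073,741,793 -> roughly a billion
```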

To give you a sense of the complexity of the task at hand, the international consulting firm Gartner went to the effort of mapping 37 of the most highly anticipated technological trends in its 2015 Hype Cycle. I’ll let you do the calculations yourself for the number of combinations stemming from all of these trends.

The problem, of course, becomes even more complicated once you realize you can combine the same two, three or ten technologies to achieve different results. Smart robots (trend A) with machine learning capabilities (trend B) could be used as autonomous cars, or they could be used to teach pupils in class. And of course, throughout this process we pretend to know that said trends will continue just the way we expect them to – and trends rarely do that.

What you should be realizing by now is that the opposite of the Failure of Segregation is the Failure of Over-Aggregation: trying to look at tens of trends at the same time, even though the human brain cannot hold such an immense variety of resultant combinations and solutions.

So what can we do?

 

Dancing between Failures

Sadly, there’s no golden rule or simple solution to these failures. The important thing is to be aware of their existence, so that discussions about the future are not oversimplified into considering just one trend, detached from the others.

Professional futurists use a variety of methods, including scenario development, general morphological analysis and causal layered analysis to analyze the different trends and attempt to recombine them into different solutions for the future. These methodologies all have their place, and I’ll explain them and their use in other posts in the future. However, for now it should be clear that the incredibly large number of possible solutions makes it impossible to consider only one future with any kind of certainty.

In some of the future posts in this series, I’ll delve deeper into the various methodologies designed to counter the two failures. It’s going to be interesting!