In this post we’ll embark on a journey back in time, to the year 2000, when you were young and eager students. You’re sitting in a lecture given by a bald and handsome futurist. He’s promising you that within 15 years, i.e. in the year 2015, the exponential growth in computational capabilities will ensure that you will be able to hold a super-computer in your hands.
“Yeah, right,” a smart-looking student sniggers loudly, “and what will we do with it?”
The futurist explains that the future you will watch movies and listen to music with that tiny computer. You exchange bewildered looks with your friends. You all find that difficult to believe – how can you store large movies on such a small computer? The futurist explains that another trend – that of exponential growth in data storage – will mean that your hand-held super-computer will also store tens of thousands of megabytes.
You see some people in the audience rolling their eyes – promises, promises! Yet you are willing to keep on listening. Of course, the futurist then completely jumps off the cliff of rationality, and promises that in 15 years, everyone will enjoy wireless connectivity almost everywhere, at a speed of tens of megabytes per second.
“That makes no sense.” The smart student laughs again. “Who will ever need such a wireless network? Almost nobody has laptop computers anyway!”
The futurist reminds you that everyone is going to carry super-computers on their bodies in the future. The heckler laughs again, loudly.
The Failure of Segregation
I assume you realize the point by now. The failure demonstrated in this exchange is what I call The Failure of Segregation. It is an incredibly common failure, stemming from our need to focus on only a single trend, and missing the combined and cumulative impacts of two, three or even ten trends at the same time.
In the example above, the forecast made by the futurist would not have been reasonable if only one trend were analyzed. Who needs superfast Wi-Fi if there aren’t advanced laptops and smartphones to use it? Almost nobody. So from a rational point of view, there’s no reason to invest in such a wireless network. It is only when you consider the three trends together – exponential growth in computational capabilities, in data storage and in wireless networks – that you can understand the future.
Every product we enjoy today is the result of several trends coming to fruition together. Facebook, for example, would not have been nearly as successful if not for these trends –
Exponential growth in computational capabilities, so that nearly everyone has a personal computer.
Miniaturization and mobilization of computers into smartphones.
Exponential improvement of digital cameras, so that every smartphone has a camera today.
Cable internet everywhere.
Wireless internet (Wi-Fi) everywhere.
Cellular internet connections provided by the cellular phone companies.
A GPS receiver in every smartphone.
The social trend of people using online social networks.
These are only eight trends, but I’m sure there are many others standing behind Facebook’s success. Only by looking at all eight trends could we have hoped to forecast the future accurately.
Unfortunately, it’s not that easy to look into all the possible trends at the same time.
A Problem of Complexity
Let’s say that you are now aware of the Failure of Segregation, and so you try to contemplate all of the technological trends together, to obtain a more accurate image of the future. If you try to consider just three technological trends (A, B and C) and the ways they could work together to create new products, you would have four possible results: AB, AC, BC and ABC. That’s not so bad, is it?
However, if you add just one more technological trend to the mix, you’ll find yourself with eleven possible results. Do the calculation yourself if you don’t believe me. The formula is relatively simple, with N being the number of trends you’re considering, and X being the number of possible combinations of trends – X = 2^N - N - 1 (all the subsets of N trends, minus the empty set and the N single-trend “combinations”).
It’s obvious that for just ten technological trends, there are about a thousand different ways to combine them together. Considering twenty trends will cause you a major headache, and will bring the number of possible combinations up to one million. Add just ten more trends, and you get a billion possible combinations.
To give you an understanding of the complexity of the task at hand, the international consulting firm Gartner has mapped 37 of the most highly anticipated technological trends in Gartner’s 2015 Hype Cycle. I’ll let you do the calculations yourself for the number of combinations stemming from all of these trends.
The problem, of course, becomes even more complicated once you realize you can combine the same two, three or ten technologies to achieve different results. Smart robots (trend A) enjoying machine learning capabilities (trend B) could be used as autonomous cars, or they could be used to teach pupils in class. And of course, throughout this process we pretend to know that said trends will continue just the way we expect them to – and trends rarely do that.
What you should be realizing by now is that the opposite of the Failure of Segregation is the Failure of Over-Aggregation: trying to look at tens of trends at the same time, even though the human brain cannot hold such an immense variety of resultant combinations and solutions.
So what can we do?
Dancing between Failures
Sadly, there’s no golden rule or a simple solution to these failures. The important thing is to be aware of their existence, so that discussions about the future cannot be oversimplified into considering just one trend, detached from the others.
Professional futurists use a variety of methods, including scenario development, general morphological analysis and causal layered analysis to analyze the different trends and attempt to recombine them into different solutions for the future. These methodologies all have their place, and I’ll explain them and their use in other posts in the future. However, for now it should be clear that the incredibly large number of possible solutions makes it impossible to consider only one future with any kind of certainty.
In some of the future posts in this series, I’ll delve deeper into the various methodologies designed to counter the two failures. It’s going to be interesting!
Today I would like to talk (write?) about the first of several different failures in foresight. This first failure – called the Failure of Nerve – was identified in 1962 by noted futurist and science fiction titan Sir Arthur C. Clarke. While Clarke mostly pinpointed this failure in the preface to his book about the future, I’ve identified several forces leading to the Failure of Nerve, and will discuss ways to circumvent it, in the hope that the astute reader will avoid similar failures when thinking about the future.
Failure of Nerve
The Failure of Nerve is one of the most frequent failures in talking or writing about the future, at least in my personal experience. When experts or even laypeople express an opinion about the future, you expect them to be knowledgeable enough to be aware of the facts and the data from the present. And yet, all too often, this expectation is smashed on the hard rock of mankind’s arrogance. The Failure of Nerve occurs when people are too fearful to look for answers in the data that surrounds them, and instead focus on repeating their preconceived notions – which might have been true in the past, but are no longer relevant in the present.
Examples of the Failure of Nerve are sadly abundant. Many quote Simon Newcomb, the famous American astronomer, who declared that flying machines were essentially impossible, a mere two years before the first flight of the Wright brothers –
“The demonstration that no possible combination of known substances, known forms of machinery and known forms of force, can be united in a practical machine by which man shall fly long distances through the air, seems to the writer as complete as it is possible for the demonstration of any physical fact to be.”
However, this is not a Failure of Nerve, since in Newcomb’s time, the data from the scientific labs themselves was incorrect. As the Wright brothers wrote about their experiments –
“Having set out with absolute faith in the existing scientific data, we were driven to doubt one thing after another, till finally, after two years of experiment, we cast it all aside, and decided to rely entirely upon our own investigations.”
Newcomb’s Failure of Nerve appeared later on, when he was confronted with reports of the Wright brothers’ success. Instead of withholding judgement and checking the data again, Newcomb only conceded that flying machines might have a slight chance of existing, but that they could certainly not carry anyone other than the pilot.
A similar Failure of Nerve can be found in the words of Napoleon Bonaparte from the year 1800, uttered in reply to news regarding Robert Fulton’s steamboat –
“What, sir, would you make a ship sail against the wind and currents by lighting a bonfire under her deck? I pray you, excuse me, I have not the time to listen to such nonsense.”
Had the rising emperor bothered to take a better look at the state of steamboats at the time, he would have learned that boats with “bonfires under their decks” were already carrying passengers in the United States, even though the venture was not a commercial success. Fulton went on to construct a steamboat (nicknamed “Fulton’s Folly”) that rose to fame, and in 1816 France finally came to its senses and purchased a steamboat from Great Britain. Knowing of Napoleon’s genius in warfare, it is an interesting thought exercise to consider how history might have changed had the emperor realized the potential of steamboats while the technology was still emergent.
How do we deal with a Failure of Nerve? To find the answer to that question, we need to understand the forces that make this failure so common.
Behind the Curtains of the Nerve
There are at least three different forces that can contribute to a Failure of Nerve. These are: selective exposure to information, confirmation bias, and last but definitely not least – the conservation of reputation.
The Force of Selective Exposure
Selective exposure to information is something we all suffer from. In this day and age, we have an abundance of information. In the past, news would have taken weeks or months to reach us, and we only had the village elder’s opinion to interpret it for us. Today we’re flooded by information from multiple media sources, each with its own not-so-secret agenda. We’re also exposed to columns by social critics and other luminaries, and we can usually tell in advance how they look at things. If you read Tom Friedman’s column, you can be sure he’ll give you the leftist approach. If you turn on the TV to The Glenn Beck Program, on the other hand, you’ll get the right-wing view.
An abundance of information is all well and good, until you realize that human beings today suffer from a scarcity of attention. They can only focus on one article at a time, and as a result they must choose how to divide their time between competing pieces of information. The easiest choice? Obviously, to go with the news that supports your current view of life. And that is indeed the way many people choose – which unsurprisingly results in a Failure of Nerve. How can you be aware of any new information that contradicts your core beliefs, if you only listen to the people who repeat those same core beliefs?
Philip E. Tetlock, in his new book Superforecasting, tells of Doug Lorch, one of the top forecasters discovered in recent years, who has found a way to circumvent selective exposure, albeit with some effort. In the words of Tetlock (p. 126) –
“Doug knows that when people read for pleasure they naturally gravitate to the like-minded. So he created a database containing hundreds of information sources – from the New York Times to obscure blogs – that are tagged by their ideological orientation, subject matter, and geographical origin, then wrote a program that selects what he should read next using criteria that emphasize diversity. … Doug is not merely open-minded. He is actively open-minded.”
Of course, reading opposite views to the one you adhere to can be annoying and vexing, to say the least. And yet, there is no other way to form a more nuanced and solid view of the future.
The Force of Confirmation Bias
Sadly, even when a person chooses to actively open his or her mind to different views, it does not mean that they will be able to assimilate the lessons into their outlook. As human beings, we are wired to –
“…search for, interpret, prefer, and recall information in a way that confirms one’s beliefs or hypotheses while giving disproportionately less attention to information that contradicts it.” – Wikipedia
The confirmation bias is well-known to any expectant parent. You walk around the city, and you find that the streets are chock-full of parents with strollers and babies. They are everywhere. You can’t avoid them in the streets, on the bus, and even at work you find that your co-worker has decided to bring her children to the workplace today. So what happened? Has the world’s birth rate suddenly doubled?
The obvious answer is that we are constantly influenced by confirmation bias. If our mind is constantly thinking about babies, then we’ll pay more attention to any drooling toddler crossing the road, and the memory will be etched much more firmly into our minds.
The confirmation bias does not influence only young parents. It has real importance in the way we view our world. A study from 2009 demonstrated that people spend 36 percent more time, on average, reading articles that they agree with. Another study from 2009 demonstrated that when conservatives watch The Colbert Report – in which Stephen Colbert satirizes a right-wing news pundit – they read extra meaning into his words. They claimed that Colbert only pretends to be joking, and actually means what he says on the show.
How does confirmation bias relate to the Failure of Nerve? In a way, it serves to negate some of the bad reputation that the Failure of Nerve has garnered from Clarke. The confirmation bias basically means that unless we make a truly tremendous and conscious attempt to analyze the world around us, our mind will fool us. We’ll pay less attention to evidence that refutes our current outlook, and consider it of lesser importance than other pieces of evidence. Or as the pioneer of the scientific method, Francis Bacon, put it (I found this great quote in a highly recommended blog: You Are Not So Smart) –
“The human understanding when it has once adopted an opinion (either as being the received opinion or as being agreeable to itself) draws all things else to support and agree with it.”
Can we fight off the influence of the confirmation bias on our thinking process? We can do so partially, but never completely, and it will never be easy. Warren Buffett (third on Forbes’ list of the richest people in the world, and one of the most successful investors alive) uses two means to tackle the confirmation bias: he specifically looks for dissenters and invites them to speak up, and (reportedly) he promptly writes down any piece of evidence that contradicts his current ideas. In the words of Buffett himself (quoted in TheDataPoint) –
“Charles Darwin used to say that whenever he ran into something that contradicted a conclusion he cherished, he was obliged to write the new finding down within 30 minutes. Otherwise his mind would work to reject the discordant information, much as the body rejects transplants.”
In short, in order to minimize the impact of confirmation bias, you need to remain constantly vigilant against the tendency to be certain of yourself. You must actively seek out those who disagree with you and listen to their opinions, and perhaps most importantly: you should write it all down, in order to distance yourself from your original perspective and allow yourself to judge your thinking as though it were someone else’s.
The Conservation of Reputation
One of the best-known laws in the physical world is the Conservation of Mass. Only slightly less well-known is the law of Conservation of Reputation, which states that the average expert always takes the best of care not to lose face or reputation in his or her dealings with the media. Upton Sinclair summed up this law nicely when he wrote –
“It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
Sadly enough, most experts believe that revisions of past forecasts, or indeed any change of opinion at all, will diminish and tarnish their reputation. And so, we can meet experts who will deny reality even when they meet it face-to-face. Some of them are probably blinded by their own big ideas and egos. Others probably choose to conserve what’s left of their reputation and dignity at any cost, even as they see their forecasts shrivel and wither in the light of the present.
The story of Larry Kudlow is particularly prominent in this regard. Kudlow forecast that President George W. Bush’s substantial tax cuts would result in an economic boom. The forecast fell flat, and the economy did not progress as well as it had during President Clinton’s term. Kudlow did not seem to notice, and declared that the “Bush Rush” was already here. In fact, in 2008 he proclaimed that the current progress of the American economy “may be the greatest story never told”. Five months later, Lehman Brothers filed for bankruptcy, and the entire global financial system collapsed along with that of the U.S.
I am going to assume that Kudlow was truly sincere in his proclamations, but obviously many other experts do not feel the need to be as honest, and will adhere to their past proclamations and declarations come hell or high water. And if we’re totally honest, it must be said that the public encourages such behavior. In January 2009, The Kudlow Report (starring none other than Kudlow himself) began airing on CNBC. Indeed, sticking to your guns even in the face of reality seems to be one of the most important lessons for experts who wish to come out on top in the present – and who assume, correctly, that few if any would force them to come to terms with their forecasts from the past.
In this text, the first of several, I’ve covered the Failure of Nerve in foresight and forecasting. The Failure of Nerve was originally identified by Arthur C. Clarke, but I’ve tried to make use of our current understanding of behavioral psychology to add more depth and to identify ways for people to overcome this all-too-common failure. Another book which has been very helpful in this endeavor was the recently published Superforecasting by Philip E. Tetlock and Dan Gardner, which you should definitely read if you’re interested in the art and science of forecasting.
There are obviously several other failures in foresight, which I will cover in future articles on the subject.
Today, the Nobel Prize winners in the field of medicine were announced. All three winners are esteemed scientists who have discovered “therapies that have revolutionized the treatment of some of the most devastating parasitic diseases”, according to the Nobel committee. This is doubtlessly true: two of the winners’ discoveries have led to the development of a drug that has nearly brought an end to river blindness; the third scientist developed a drug that has reduced mortality from malaria by 30 percent in children, and saves over 100,000 lives each year.
I could go on about the myriad ways in which medicine is improving the human condition worldwide, or about how we’re eradicating diseases that have afflicted the human race since time immemorial. I won’t do that. The progress of medicine is self-evident, and in any case is a matter for a longer blog post. Instead, let us focus on a different venture: the attempt to forecast the Nobel Prize winners.
The Citation Laureates
Every year since 2002, the Thomson Reuters media and information corporation takes a shot at forecasting the Nobel laureates. To that end, they analyze the most highly cited research papers in every field, and the authors behind them. One’s prestige as a scientist largely comes from a high citation rate – i.e. the number of times people have referred to your work when conducting their own research. It’s therefore clear why this single simple parameter, so easily quantified, could serve as a good basis for forecasting the annual Nobel winners.
So far, it looks like Thomson Reuters have done quite well with their forecasts. In every year except 2004, they have successfully identified at least one Nobel Prize winner in all the scientific fields: Physiology or Medicine, Physics, Chemistry and Economics. Overall, Thomson Reuters has “correctly forecast 21 of 52 science Nobel Prizes awarded over the last 13 years”.
It is fascinating for me that by working with tools for the analysis of big data, one could reach such a high rate of success in forecasting the decisions made by the Nobel committees. But here’s the deeper issue, in my opinion: Thomson Reuters clearly intends only to forecast the Nobel winners – but is it possible that their selection is more accurate than that of the Nobel committee?
The Limits of Committees
How is the Nobel Prize decided? Every year, thousands of distinguished professors from around the world are asked to nominate colleagues who deserve the prize. Each committee for the scientific prizes ends up with 250-350 nominees, whom they then screen and analyze in order to come up with only a few recommendations to present to the 615 members of the Royal Swedish Academy of Sciences – who will then vote for the final winners.
Note that the rate-limiting step in the process rests in the hands of the committee members. The number of members varies between committees, but generally ranges between 6 and 8. And as anyone who has ever taken part in a committee discussion knows, there are usually only two or three people who really influence and shape the debate. In other words, if you want a real chance at winning the Nobel Prize in your field, you had best develop your connections with the most influential members of the appropriate committee.
Please note that I’m not accusing the Nobel committees of fraud or nepotism. However, we know that even the best and most reliable experts in the world are subject to human biases – sometimes without even realizing it. The human mind, after all, is a strangely convoluted place, with most of the decision-making process being handled subconsciously. Individual decision makers are therefore biased by nature, as are small committees. The Nobel laureate selection process, therefore, is biased – which I guess we all knew anyway – and even worse, it remains under wraps, and the actual discussions taking place are not shared with the public for criticism.
Examples of (alleged) bias can be found easily (heck, there’s an entire Wikipedia page dedicated to the subject). Henry Eyring allegedly failed to receive the Nobel Prize because of his Mormon faith; Paul Krugman received the prize because of (again, allegedly) the left-leaning bias of the committee; and when the scientist behind the discovery of HPV was selected to receive the prize, an anti-corruption investigation soon followed, since two senior figures on the committee had strong links to a pharmaceutical company dealing in HPV vaccines.
The Wisdom of Data
Now consider the core of the Thomson Reuters process. The company’s analysts go over all the papers and citations in an automated fashion, using algorithms that they define. The algorithms are only biased if they’re created that way – which means that the algorithms and the entire process would need to be fully transparent. The algorithms can cut the list of potential candidates down to a mere dozen or so – and then let the Royal Swedish Academy do the rest of the work and vote for the top ones.
Is this process necessarily better than the committee? Obviously, many flaws still abound. The automated process could put more emphasis on charismatic ‘rock stars’ of the scientific world, for example, and neglect the more down-to-earth scientists. Or it could focus on those scientists who are incredibly well-connected and who have many collaborations, while leaving aside those scientists who only made one big impact in their field. However, proper programming of the algorithms – and accurately defining the parameters and factors behind the selection process – should take care of these issues.
Does this process, in which an automated algorithm picks a human winner, seem weird to you? It shouldn’t, because it’s happening on the World Wide Web every second. Each time you do a Google search, the computer goes over millions of possible results and only shows you the ‘winners’ at the top, according to factors that include their links to each other (i.e. the number of citations), the reputation of the site, and other parameters. Google has turned this selection process into a form of art – and an accurate science.
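To make the analogy concrete, here is a toy sketch of PageRank-style ranking applied to a citation graph. The author names and citation links are invented for illustration, and this is certainly not Thomson Reuters’ actual algorithm – it just shows the general idea of letting citations vote, with citations from highly-ranked authors counting for more:

```python
def rank_by_citations(cites, damping=0.85, iterations=50):
    """cites maps each author to the list of authors they cite.
    Returns authors sorted from most to least influential.
    Being cited by highly-ranked authors raises your own rank."""
    authors = list(cites)
    score = {a: 1.0 / len(authors) for a in authors}
    for _ in range(iterations):
        # Everyone keeps a small baseline score...
        new = {a: (1 - damping) / len(authors) for a in authors}
        # ...and the rest is passed along citation links.
        for a, targets in cites.items():
            if targets:
                share = damping * score[a] / len(targets)
                for t in targets:
                    new[t] += share
            else:  # an author who cites nobody spreads their score evenly
                for t in authors:
                    new[t] += damping * score[a] / len(authors)
        score = new
    return sorted(score, key=score.get, reverse=True)

# A made-up four-author citation graph:
citations = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C", "A"],
}
print(rank_by_citations(citations))  # "C", cited by everyone, comes out on top
```

Raw citation counts would give a similar top pick here; the recursive version matters when you want a citation from a Nobel-calibre author to weigh more than one from an obscure paper.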
Why not do that to the Nobel Prize as well?
Your Nobel Forecast
Over the next week, the recipients of the Nobel Prize will be announced one after the other. Would you like to impress your friends by forecasting the recipients? Here’s an infographic made by Thomson Reuters and detailing their forecasts for 2015. Good luck to everyone in it!