Futuronymity: Keeping Our Privacy from Our Grandchildren

History is a story that will never be told in full. So much of the information – almost all of it – has been lost to the past, either gone or never recorded in the first place. We can barely make sense of the present, in which new information about events and the people behind them is released every day. What chance do we have, then, of fully deciphering the complex stories underlying history – the betrayals, the upheavals, the personal stories of the individuals who shaped events?

The answer has to be that we have no way of reaching any certainty about the stories we tell ourselves about our past.

But we do make some efforts.

Medical doctors and historians pore over biographies and ancient skeletons in order to retro-diagnose ancient kings and queens. Occasionally they identify diseases and disorders that were unknown or misunderstood at the time those individuals actually lived. Mummies of ancient pharaohs are x-rayed, and we suddenly have a better understanding of a story that unfolded more than three thousand years ago, realizing that the pharaoh Ramesses II suffered from a degenerative spinal condition.

Similarly, geneticists and microbiologists use DNA evidence to bring conclusive endings to some historical mysteries. DNA evidence from bones has allowed us to put to rest the rumors, for example, that two of Czar Nicholas II's children survived the family's execution in 1918.

The Russian czar Nicholas II with his family. DNA evidence now shows conclusively that Anastasia, the youngest daughter, did not survive the mass execution of the family in 1918. Source: Wikipedia

The above examples have something in common: they all require hard work by human experts. The experts need to pore over ancient histories, analyze the data and the evidence, and at the same time maintain a good understanding of present-day science and medicine.

What happens, though, when we let a computer perform similar analyses in an automatic fashion? How many stories about the past could we resolve then?

We are rapidly making progress towards such achievements. Recently, three researchers from Waseda University in Japan published a paper showing they can use a computer to colorize old black & white photos. They rely on convolutional neural networks, which are loosely inspired by certain structures of the biological brain. Convolutional neural networks have a strong capacity for learning, and can thus be trained to perform certain cognitive tasks – like adding color to old photos. While computerized colorization has been developed and used before, the authors' methodology appears to achieve better results than its predecessors, with 92.6 percent of the colorized images looking natural to users.
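The authors' actual network is far larger and its weights are learned from vast numbers of color photos, but the core operation it stacks – convolution – is simple enough to sketch. The toy below (plain Python, random untrained weights, an invented 8×8 "photo") only illustrates the mechanics: a bank of 3×3 filters slides over a grayscale image and emits two extra channels that a trained network would interpret as color information.

```python
import random

def conv2d(image, kernels, biases):
    """Apply a bank of 3x3 kernels to a single-channel image (valid padding)."""
    h, w = len(image), len(image[0])
    out = []
    for k, b in zip(kernels, biases):
        channel = [[sum(k[di][dj] * image[i + di][j + dj]
                        for di in range(3) for dj in range(3)) + b
                    for j in range(w - 2)]
                   for i in range(h - 2)]
        out.append(channel)
    return out

random.seed(0)
# A toy 8x8 grayscale "photo" with intensities in [0, 1].
gray = [[random.random() for _ in range(8)] for _ in range(8)]

# Two random 3x3 kernels standing in for learned filters; a real
# colorization network stacks many such layers and trains the weights.
kernels = [[[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
           for _ in range(2)]
chroma = conv2d(gray, kernels, biases=[0.0, 0.0])

# Two predicted "color" channels, each 6x6 after valid-padding convolution.
print(len(chroma), len(chroma[0]), len(chroma[0][0]))
```

In the real system these outputs would be combined with the original grayscale channel to produce the final colorized image.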

Colorized pictures from the past
Colorized black & white pictures from the past. An AI engine was used to add color – essentially new information – to these hints from our past. Source: paper by Iizuka, Simo-Serra and Ishikawa

This is essentially an expert system, an AI engine operating in a way loosely similar to that of the human brain. It studies hundreds of thousands of pictures, and then applies its insights to new ones. Moreover, the system can autonomously go over every picture ever taken and add a new layer of information to it.

There are limits to the method, of course. Even the best AI engine can miss the mark when the existing information is not sufficient to produce a reliable insight. In the example below you can see that the AI colored the tent orange rather than blue, since it had no way of knowing the tent's original color.

But will that stay the case forever?

Colorized black & white picture - with wrong color
Colorized black & white picture that was colored incorrectly since no information existed about the tent from other sources. Source: paper by Iizuka, Simo-Serra and Ishikawa

As I previously discussed in the Failures of Foresight series of posts on this blog, the Failure of Segregation makes it difficult for us to forecast the future, because we try to look at each trend and each piece of evidence on its own. Let's try to work past that failure, and instead consider what happens when an AI coloring expert is combined with an AI system that recognizes items like tents, associates them with certain brands, and can even analyze how many tents of each color that brand sold in any given year – or at least what the most popular tent color was at the time.

When you combine all of those AI engines together, you get a machine that can tell you a highly nuanced story about the past. Much of it is guesswork, obviously, but those are quite educated guesses.
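Such a combination of engines might be sketched as follows. Everything here is an invented stand-in – the three functions are placeholders for a colorization network, an object-recognition model, and a sales-statistics lookup – but the sketch shows how a historical prior could override a colorizer's uninformed guess:

```python
def colorize(photo):
    # Stand-in for the colorization network: it guesses a tent color
    # from image statistics alone, with no historical knowledge.
    return {**photo, "tent_color": "orange"}

def recognize_items(photo):
    # Stand-in for an object-recognition model.
    return ["tent"] if "campsite" in photo["scene"] else []

def sales_prior(item, year):
    # Stand-in for a database of historical sales statistics.
    return {"tent": {1955: "blue"}}.get(item, {}).get(year)

def tell_story(photo):
    photo = colorize(photo)
    for item in recognize_items(photo):
        likely = sales_prior(item, photo["year"])
        if likely and likely != photo["tent_color"]:
            # The historical prior overrides the colorizer's guess.
            photo["tent_color"] = likely
    return photo

print(tell_story({"scene": "campsite", "year": 1955}))
```

The point of the sketch is the composition itself: each engine alone produces an educated guess, but chained together they produce a more nuanced – and better grounded – story.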

 

The Artificial Exploration of the Past

In the near future, we'll use many different kinds of AI expert systems to explore the stories of the past. Some artificial historians will discover cycles in history – princes assassinating their kingly fathers, for example – that recur with higher probability, and will analyze ancient stories accordingly. Other artificial historians will compare genealogies, while yet others will analyze ancient scriptures and identify different patterns of writing. In fact, such an algorithm has already been applied to the Bible, revealing that the Torah was written by several different authors and distinguishing between them.
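The actual algorithm applied to the Bible is far more sophisticated, but the underlying idea of authorship attribution can be illustrated in a few lines: different authors leave statistical fingerprints in their use of common function words. The passages below are invented, and the seven-word function list is a deliberately tiny stand-in for the hundreds of features real stylometry uses.

```python
import math
from collections import Counter

FUNCTION_WORDS = ["the", "and", "of", "to", "in", "a", "that"]

def profile(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(u, v):
    """Cosine similarity between two frequency vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Two invented passages with different function-word habits, plus an
# unattributed snippet written in the style of the first.
author_a = "the king spoke to the people and the people listened to the king and the land"
author_b = "a voice in a desert in a dream in a night a shadow in a storm"
unknown  = "the queen spoke to the court and the court bowed to the queen and the law"

sim_a = cosine(profile(unknown), profile(author_a))
sim_b = cosine(profile(unknown), profile(author_b))
print(sim_a > sim_b)  # the snippet matches author A's style
```

Real authorship-attribution work clusters thousands of such feature vectors across many text segments; this sketch only shows why the fingerprints are detectable at all.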

The artificial exploration of the past is going to add many fascinating details to stories we've long thought settled and concluded. But it also raises an important question: when our children, and our children's children, look back at our present and try to derive meaning from it – what will they find out? How complete will their stories of our present, which is their past, be?

I suspect those stories – the actual knowledge and understanding of the order of events – will be even more complete than what we who dwell in the present know.

 

Past-Future

In the not-so-far-away future, machines will be used to analyze all of the world's data from the early 21st century. This is a massive amount of data: 2.5 quintillion bytes are created daily – enough to fill roughly a hundred million single-layer Blu-ray discs. It is astounding to realize that 90 percent of the world's data today was created in the last two years alone. Human researchers could not make much sense of it, but advanced AI algorithms – a super-intelligence, in some ways – could actually have the tools to crosslink many different pieces of information and obtain the story of the present: to find out what movies families watched on a specific day, in which hotel the President of the United States stayed during a recent visit to France and what snacks he ordered from room service, and much other paraphernalia.
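The arithmetic behind that disc count is easy to check, assuming short-scale "quintillion" (10^18) and 25 GB single-layer discs:

```python
DAILY_BYTES = 2.5e18   # 2.5 quintillion bytes created per day
DISC_BYTES = 25e9      # a single-layer Blu-ray disc holds about 25 GB

discs_per_day = DAILY_BYTES / DISC_BYTES
print(f"{discs_per_day:,.0f} discs per day")  # 100,000,000 discs per day
```

Dual-layer 50 GB discs would halve the figure, but the order of magnitude – tens of millions of discs every single day – stays the same.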

Are those details useless? They may seem so to our limited human comprehension, but they will form the basis for AI engines to better understand the past and produce better stories of it. When the people of the future try to understand how World War 3 broke out, their AI historians may actually conclude that it all began with a presidential case of indigestion at a certain French hotel, which annoyed the American president so much that it prevented him from making the most rational choices over the next couple of days. A hypothetical scenario, obviously.

 

Futuronymity – Maintaining Our Privacy from the Future

We are gaining improved tools for exploring the past, and for deriving insights and new knowledge even where information is missing. These tools will be improved further in the future, and will be used to analyze our current times – the early 21st century – as well.

What does it mean for you and me?

Most importantly, you should realize that almost every action you take in the virtual world will be scrutinized by your children's children, probably after your death. Your actions in the virtual world are recorded all the time, and if that documentation survives into the future, the next generations are going to know all about your browsing habits in the middle of the night. Yes, even though you turned incognito mode on.

This means we need to develop a new concept of privacy: futuronymity (from Future and Anonymity), which will obscure our lives from the eyes of future generations. Politicians have always been concerned about this kind of privacy, since they know their critical decisions will be pored over by historians. In the future, common people will find themselves under similar scrutiny by their progeny. If our current hobby is going to psychologists to understand just how our parents ruined us, then our grandchildren's hobby will be going to the computer to find out the same.

Do we even have the right to futuronymity? Should we hide from next generations the truth about how their future was formed, and who was responsible?

That question is no longer in the hands of individuals. In the past, private individuals could simply have incinerated their hard drives, along with all the information on them. Today, most of the information is in the hands of corporations and governments. If we want them to dispose of it – if we want any say in which parts they'll preserve and which will be deleted – we should speak up now.

 

 


The Failure of Myth and the Future of Medical Mistakes

 

Please note: this is another chapter in a series of blog posts about Failures in Foresight. You may want to also read the other blog posts dealing with the Failure of Nerve, the Failure of the Paradigm, and the Failure of Segregation.

 

At the 1900 World Exhibition in Paris, French artists attempted to forecast the shape of the world in 2000. They produced a few dozen vivid and imaginative drawings (clearly they did not succumb to the Failure of the Paradigm!).

Here are a few samples from the World Exhibition. Can you tell what they all have in common?

Police motorcycles in the year 2000
Skype in the year 2000
Phonecalls and radio in the year 2000
Fishing for birds in the year 2000

 

Psychologist Daniel Gilbert wrote about similar depictions of the future in his book “Stumbling on Happiness”:

“If you leaf through a few of them, you quickly notice that each of these books says more about the times in which it was written than about the times it was meant to foretell.”

You need only take another look at the images to convince yourself of the truth of Gilbert's statement. The women and men are dressed just as they were in 1900, except when they go ‘bird hunting’ – in which case the gentlemen wear practical swimming suits, whereas the ladies still stick with their cumbersome dresses underwater. Policemen still carry swords and wear brass helmets, and of course there are no policewomen. Last but not least, it seems the future is entirely reserved for the Caucasian race, since nowhere in these drawings can you see persons of African or Asian descent.

 

The Failure of Myth

While some of the technologies depicted in these old drawings did become reality (Skype is a nice example), it is clear the artists completely failed to capture a larger change. You may call it a change in the zeitgeist – the spirit of the generation – or in the myths that surround our existence and lives. I'll be calling this the Failure of Myth, and I hope you'll agree that it's impossible to consider the future without also taking into account these changes in our mythologies and underlying social and cultural assumptions: women are equal to men, people of color have the same rights as white people, and LGBT people have just the same right to exist as heterosexuals. None of these assumptions would have been obvious, or included in the myths and stories upon which society is based, a mere fifty years ago. Today they are taken for granted.

 

The myth according to which black people have very few real rights was overturned in the 1960s. Few forecasters thought of such an occurrence in advance.

 

Could we ever have forecast these changes?

Much as with the Failure of the Paradigm, I would posit that we can never accurately forecast the ways in which myths and culture are about to change. We can hazard some guesses, but that's just what they are: guesswork that relies more on the myths of the present than on any solid understanding of the future.

That said, there are certain methodologies used by foresight researchers that could help us at least chart different solutions to problems in the present, in ways that force us to consider our current myths and worldviews – and challenge them when needed. These methodologies allow us to create alternative futures that could be vastly different from the present in the ways that really matter: how people think of themselves, of each other, and of the world around them.

One of the best known methodologies used for this purpose is called Causal Layered Analysis (CLA). It was invented by futures studies expert Sohail Inayatullah, who also describes case studies for using it in his recent book “What Works: Case Studies in the Practice of Foresight”.

In the rest of this blog post, I’ll sum up the practical principles of CLA, and show how they could be used to analyze different issues dealing with the future. Following that, in the next blog post, we’ll take a look again at the issue of aerial drones used for terrorist attacks, and use CLA to consider ways to deal with the threat.

 

Another Failure of Myth: the ancient Greeks could not imagine a future without slavery. None of their great philosophers could escape the myth of slavery. Image originally from Wikipedia

 

 

CLA – Causal Layered Analysis

The core of CLA is the idea that every problem can be looked at in four successive layers, each deeper than the previous one. Let's look at each layer in turn, and see how each adds depth to a discussion of a particular problem: the “high rate of medical mistakes leading to serious injury or death”, as Inayatullah describes it in his book. My brief analysis of this problem at every level is almost entirely based on his examples and thoughts.

First Layer: the Litany

The litany is the day-to-day talk. When you argue at dinner parties about the present and the future, you're almost certainly using the first layer. You're basically repeating whatever you've heard from the media, from politicians, from thought leaders and from your family. You may make use of data and statistics, but these are interpreted only according to the prevalent worldview that most people share.

When we rely on the first layer to consider the issue of medical mistakes, we look at the problem in a largely superficial manner. We can sum up the approach in one sentence: “Physicians make mistakes? Teach them better, and if they still don't improve, throw them in jail!” In effect, we focus on the people making the mistakes – the ones who are so easy to blame. The solutions at this layer are usually short-term, and can be summed up in short sentences that appeal to audiences who share the same worldview.

Second Layer: the Systemic View

Using the systemic view of the second layer, we try to delve deeper into the issue. We no longer blame people (although that does not remove the responsibility for their mistakes from their shoulders), but instead try to understand how the system itself contributes to the actions of the individual. To do that we analyze the social, economic and political forces that mold the system into its current shape.

In the case of medical mistakes, the second layer encourages us to start asking tougher questions about the systems under which physicians operate. Could it be, for example, that physicians rush their treatments because they are allowed to spend only 5-10 minutes with each patient, as is the custom in many public medical services? Or perhaps the layout of the hospital does not allow physicians to consult easily with one another, preventing them from reaching more solid conclusions via teamwork?

The questions asked in the second-layer mode of thinking allow us to improve the system itself and make it more efficient. We do not take responsibility off the shoulders of individuals, but we do accept that better systems allow and encourage individuals to reach their maximum efficiency.

Third Layer: Worldview

This is the layer where things get hairy for most people. Here we try to identify and question the prevalent worldview and how it contributes to the issue. These are our “cognitive lenses”, through which we view and interpret the world.

As we try to analyze the issue of medical mistakes in the third layer, we begin to identify the worldviews behind medicine. We see that in modern medicine, the doctor stands “high above” in the hierarchy of knowledge – certainly much higher than patients. This hierarchy of knowledge and prestige defines the relationship between the physician and the patient. Once we understand this worldview, solutions that would have fit the second layer – like increasing the time physicians spend with patients – seem more like a small bandage on a gut wound than an effective way to deal with the issue.

Another worldview that can be identified and challenged in this layer is the idea that patients actually need to go to clinics or hospitals for check-ups. In an era of tele-presence and ubiquitous electronics, why not make use of wearable computing or digital doctors to take care of many patients? As we recognize this worldview and propose alternatives, we find that systemic solutions like “changing the layout of the hospitals” become unnecessary once more.

Fourth Layer: the Myth

The last layer, the myth, deals with the stories we tell ourselves and our children about the world and the ways things work. Mythologies are defined by Wikipedia as –

“a collection of myths… [and] stories … [that] explain nature, history, and customs.”

Make no mistake: our children's books are all myths that serve to teach children how to behave in society. When my son reads about Curious George, he learns that unrestrained curiosity can lead you into danger, but also to unexpected rewards. When he reads about Hansel and Gretel, he learns of the dangers of trusting strangers and step-moms. Even fantasy books teach us myths about the value of wisdom, physical prowess and even beauty, as the tall, handsome prince saves the day. Myths are perpetuated everywhere in culture, and are constantly reinforced in our minds through the media.

What can we say about medical mistakes in the Myth level? Inayatullah believes that the deepest problem, immortalized in myth throughout the last two millennia, is that “the doctor knows best”. Patients are taught from a very young age that the physician’s verdict is more important than their own thoughts and feelings, and that they should not argue against it.

While I see the point in Inayatullah's view, I'm not as certain that it is the reason behind medical mistakes. Instead, I would add a partner myth: “the human doctor knows best”. This myth is spread to medical doctors in many institutes, and makes it more difficult for them to rely on computerized analysis, or even to consider that, as human beings, they are biased by nature.

 

Consolidating the Layers

As you may have realized by now, CLA is not used to forecast one accurate future, but is instead meant to deepen our thinking about potential futures. Any discussion about long-term issues should open with an analysis of those issues in each of the four layers, so that the solutions we propose – i.e. the alternative futures – can deal not only with the superficial aspects of the issue, but also with the deeper causes and roots.

 

Conclusion

The Failure of Myth – i.e. our difficulty in realizing that the future will change not only technologically, but also in the myths and worldviews we hold – is impossible to counter completely. We can't know which myths future generations will embrace, just as we can't forecast scientific breakthroughs fifty years in advance.

At most, we can be aware of the existence of the Failure of Myth in every discussion we hold about the future. We must assume, time after time, that the myths of future generations will be different from ours. My grandchildren may look at their meat-eating grandfather in horror, or laugh behind his back at his pants and shirt – while they walk naked in the streets. They may believe that complicated decisions should be left solely to computers, or that physical work should never be performed by human beings. These are just some of the possible myths that future generations can develop for themselves.

In the next blog post, I'll go over the issue of aerial drones used for terrorist attacks, and analyze it using CLA to identify a few possible myths and worldviews that we may need to change in order to deal with this threat.

 


Conspiracy Theories – Past, Present and Future

A few years ago I gave a short lecture about conspiracy theories, in which I described HAARP – the High Frequency Active Auroral Research Program. I explained some of the purposes and goals of the project, most of which dealt with influencing the ionosphere to aid radio-wave transmission. The lecture was recorded and uploaded to Youtube (in Hebrew, so I'm not going to link to it here), and was apparently picked up by some conspiracy theorists – particularly chemtrails activists – as proof that I support and endorse their ideas.

The conspiracy theories in question are long and convoluted, but most activists seem to agree on one point: a shadow organization controls all governments, and is using climate- and weather-engineering technologies to spread toxic materials through the environment. These toxic materials supposedly afflict people with autism, Alzheimer's disease and cancer, and occasionally also exert some form of mind control to calm the distraught and dying population. Why are the shadowy government / the Illuminati / the Freemasons doing all that? The most detailed version of the tale I've found is that they want to eliminate most of the human population on Earth, in order to return us to the olden days of sustainability. And that, in an incredibly condensed nutshell, is the conspiracy theory behind chemtrails.

 

Chemtrails: are ‘they’ poisoning us all?

 

Needless to say, these ideas are very far from my own. And yet, my own reading about conspiracies past and present has led me to some uncomfortable questions. How can we know, for one, whether a conspiracy theory has a grain of truth in it, or is completely false?

Real Conspiracies – Past and Present

The truth is that there is some basis for believing in conspiracy theories. Governments can act maliciously against the common citizen, or against a group of citizens – and even hide evidence of their wrongdoing. Some conspiracy theories from the past that turned out to be right include:

  • The government is spying on us: the belief that the U.S. government is after us all was confirmed when Snowden released highly classified documents proving once and for all that several large software and hardware companies had secretly provided the government with access to their data. Using this data, the government could essentially read every e-mail sent by targeted individuals, and follow their every move online. As it turns out, this spying program is still in operation today.
  • The government infects us with diseases: during the 1940s, Guatemalan physicians deliberately infected healthy Guatemalan citizens with syphilis, under the guidance and with the funding of the United States. The documentation of the experiments was only uncovered decades later, but there is no doubt today that this “dark chapter in the history of medicine”, as the NIH director called it, actually occurred. In fact, the U.S. has issued a formal apology for these incidents.
  • The government is controlling our minds: this one is trickier than the rest, since one has to define ‘mind control’ before trying to figure out whether the government is actually doing it. It's pretty obvious that even democratic governments influence our paradigms and belief systems, constantly trying to shape us into becoming more productive and respectful of each other, since that serves the greater good of both the government and the citizens themselves. That is why governments fund public schools, after all, and I see very few people complaining about that form of mind control.

A more delicate form of ‘mind control’, more accurately described as “subtle persuasion”, is beginning to be used mainly by political candidates. Making use of big data collected about citizens, Obama's team of data scientists pinpointed the “highly persuadable” voters during the 2012 election campaign, and targeted them specifically. As Sasha Issenberg describes in his article in MIT Technology Review, the data scientists even figured out how best to approach and persuade individuals according to dozens of different parameters. This is a form of persuasion that should be viewed with much suspicion, since the data scientists are in effect finding the best ‘keys’ to every person's locked cognition – and who among us does not have such keys? So yes – in a way, politicians do try to control our minds, but in very delicate and subtle ways.

Obviously, this is a very short list of past and present conspiracy theories that turned out to be real after all – or, in the case of the third one, have never been denied. There are many others, and I would encourage you to read Conspiracy Theories and Secret Societies for Dummies if you want to know more about them. For now, it is enough to understand that yes – occasionally, conspiracy theories can turn out to be very real indeed.

That said, there are differences between the popular conspiracy theories, and the ones that have turned out real. We can identify those differences in order to figure out which conspiracy theories should be considered carefully, and which we can ignore.

Real vs. Spurious Conspiracy Theories

There are four claims or assumptions that are exceedingly common in conspiracy theories, and that should raise alarm bells in our minds when we hear them. The presence of any one of the following in a conspiracy theory should make us doubt its authenticity. These are:

  • Claims unsubstantiated by science: we're talking here about the more spurious claims – witchcraft and ion waves controlling people's minds, for example, or claims that the government is engineering the global climate, when environmental scientists are still scratching their heads over how we might counteract global warming.

 

Science: not a liberal conspiracy. Taken from Pinterest. Originally attributed to Carl Sagan.
  • Claims requiring extremely unlikely collaborations: is there truly a ‘shadow government’ striving for a single goal? It would have to include sworn enemies like Iran and Israel, North Korea and South Korea, India and Pakistan. Do you believe that none of the politicians from more than a hundred nations, none of their assistants, and none of those involved in international relations have ever exposed such an overarching government to the media? I almost wish such a shadow government existed, since it would show that nations can get along after all for a single purpose.
  • Claims that require the people in charge to put themselves and their families in danger: one of the top claims of the chemtrails conspiracy theories is that the government is trying to poison us all from above. Such an approach would obviously also injure the people in charge and their families, who breathe the same air we do. It takes suspension of disbelief to the maximum to believe that people would deliberately harm themselves and their families.
  • Extreme and inexplicable clumsiness in execution: why does the ‘shadow government’ want to spread Alzheimer's disease, cancer and infections all around? Why is the government “dropping pathogens and other, more threatening materials, aimed at making us sick”, in Edward Group's words? One of the more popular explanations is that ‘they’ want to minimize the world's population. But if that's the case, why use Alzheimer's disease, which mainly disables the elderly? Why cancer – a disease correlated with old age? And why use diseases that keep people ill for a very long time before they die, forcing their relatives to take care of them and thus damaging the world's economy? It's difficult to believe that any organization sophisticated enough to keep the original plot secret would flounder so badly in its execution.

You can see that in all three real conspiracies I detailed above, none of these four warning signs is present. In all three cases there is a valid scientific basis behind the conspiracy theory, the collaborations between the ‘plotters’ and the executors are plausible, and when somebody gets harmed (as in the case of the syphilis experiment), it's never the perpetrators of the conspiracy. Last but not least, the methods of execution largely make sense, and are as efficient as they can be given the scientific and technological limitations of the time.

 

The Chemtrails Test

How does the chemtrails theory fare when tested against these four warning signs? Not well at all. The idea that governments mind-control or infect the population with diseases using volatile compounds spread from above does not stand up to scientific scrutiny (first warning sign). Even if the government could do that, it would be an extremely clumsy execution (fourth warning sign): why should the government repeatedly spread toxic materials in the air, in the most noticeable way possible, instead of doing it just a few times at low altitude, where such materials would have more effect? Why not spread the materials in drinking water, or in foodstuffs?

These claims seem even more bizarre when one realizes that transmitting pathogens and/or mind-altering drugs through the air would certainly injure the families of decision makers, who breathe the same air as everybody else (third warning sign). And last but not least: executing such a plan would require collaboration between a large number of entities (second warning sign): scientists, airplane pilots, and even diplomats, politicians and heads of state from all over the globe. It seems extremely unlikely that such a collaboration could occur, or be kept under wraps for long.

 

Conclusion

It's important to understand that conspiracy theories occasionally do turn out to be real, at least partially. The ‘weirdness factor’ of a theory does not necessarily exclude it from rigorous deliberation, since the future always seems weird from our viewpoint in the present (see the Failure of the Paradigm for more on that). However, we can differentiate between certain, more plausible, conspiracy theories and others that are much less plausible – and that therefore require more evidence before we can take them seriously.

In this post I highlighted four warning signs that could help us steer clear of certain conspiracy theories, unless their advocates provide us with much more significant evidence than they currently have. These warning signs apply to conspiracy theories about aliens and alien abductions, to anti-vaccination conspiracy theories, and yes – to the chemtrails theories as well.

Twenty or thirty years from today, we will likely look back at some of the conspiracy theories of the past, and recognize in hindsight that a small number of them had some merit. But I’m pretty sure it won’t be the theory about chemtrails.

 

Can We Defend Our Culture From Terrorist Attacks? Yes, by Virtualizing It

I gave a lecture in front of the Jewish Alliance of Greater Rhode Island, which is a lot like the Justice League, but Jewish. I was telling them about all the ways in which the world is becoming a better place, and all the reasons for these trends to go on into the future. There are plenty of reasons for optimism: more people are literate than ever before; the number of people suffering from extreme poverty is rapidly declining and is about to fall below 10% for the first time ever in human history; and the exponential progress in solar energy could ensure that decontamination and desalination devices could operate everywhere, overcoming the water crisis that many believe looms ahead.

After the lecture was done I opened the stage for questions. The first one was short and to the point: “What about terrorists?”

It does look like nowadays, following the attacks on Paris, terrorists are on everybody’s mind. However, it must be said that while attacks against civilians are deplorable, terrorists have generally had very little success with them. The September 11 attacks carried the worst death toll of any terrorist attack in recent history, with 19 plane hijackers killing 2,977 people. While terrorism may yet progress to chemical and biological warfare, so far its cost in lives has been relatively small, and its main effect is on the morale of the people.

I would say the question that’s really bothering people is whether terrorists can eventually deal a debilitating deathblow to Western culture, or at the very least create a disturbance severe enough to make that culture go into rapid decline. And that raises an interesting question: can we find a way to conserve our culture, our values and our monuments for good?

I believe we have already found a way to do that, and Wikipedia is a shining example.

 

Creative Destruction and Wikipedia

Spot the Dog is a series of children’s books about the adventures of Spot (the dog). On July 3, 2012, the Wikipedia entry for Spot the Dog was changed to claim that the author of the series was, in fact, none other than Ernest Hemingway, writing under the pseudonym Eric Hill. In the revised Wikipedia entry the readers learned about “Spot, a young golden retriever who struggles with alcoholism and a shattered sense of masculinity.”

Needless to say, this was a hoax. Spot is obviously a St. Bernard puppy, and not a “young golden retriever”.

 

spotthedogwikibombupdate.png

 

 

What’s interesting is that within ten minutes of the hoax’s perpetration, it was removed and the original article was restored, as if nothing had ever gone wrong. That is not surprising to us, since we’ve gotten used to the fact that Wikipedia keeps a backup of every article and of every revision ever made to it. If something goes wrong – the editors just pull up the latest version from before the incident.

A system of this kind can only exist in the virtual world, because of a unique phenomenon: due to the exponential growth in computing capabilities and data storage, bits now cost less than atoms. The cost for keeping a virtual copy of every book ever written is vastly lower than keeping such copies on paper in the ‘real’ world – i.e. our physical reality.

The result is that Wikipedia is invulnerable to destruction and virtual terrorism, as long as there are people who care enough to restore it to its previous state, and as long as the data can be distributed easily between people and computers instead of remaining in one centralized data-bank. The virtualization and distribution of the data have essentially immortalized it.
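The mechanism that makes this resilience possible is simple enough to sketch in a few lines of code. The following is a minimal, illustrative model of an article that keeps its full revision history – the class and method names are my own inventions, not part of MediaWiki’s actual software:

```python
# A minimal sketch of the idea behind Wikipedia's resilience: every revision
# of an article is kept, so vandalism can be undone by restoring an earlier
# version. Class and method names are illustrative, not MediaWiki's code.

class Article:
    def __init__(self, title, text):
        self.title = title
        self.revisions = [text]  # full history, oldest first

    @property
    def current(self):
        return self.revisions[-1]

    def edit(self, new_text):
        # An edit never overwrites history; it only appends to it.
        self.revisions.append(new_text)

    def revert(self, to_revision=-2):
        # Restoring is just re-publishing an earlier stored version.
        self.edit(self.revisions[to_revision])


page = Article("Spot the Dog", "Spot is a St. Bernard puppy.")
page.edit("Spot, a young golden retriever who struggles...")  # the hoax
page.revert()  # the editors pull up the last version before the incident
print(page.current)  # prints "Spot is a St. Bernard puppy."
```

The key design choice is that nothing is ever overwritten: even a revert is just another edit that re-publishes an older revision, so the hoax itself remains in the history for anyone to inspect.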

Can we immortalize objects in the physical world as well?

 

Immortalization via Virtualization

On February 27, 2015, Islamic State militants brought sledgehammers into the Mosul Museum, and carefully and thoroughly shattered an unknown number of ancient statues and artefacts from the Assyrian era. In effect, the terrorists committed a crime of cultural murder. It is probable that several of the artefacts destroyed in this manner had no virtual representation yet, and are thus gone forever. They are, in a very real sense of the word, dead.

An Islamic State militant destroying an ancient statue inside the Mosul Museum in Nineveh. Source: AFP

 

Preventing such a tragedy from ever occurring again is entirely within our capabilities. We simply need to obtain high-resolution scans of every artefact in every museum. Such a venture would certainly come at a steep cost – quite possibly more than a billion dollars – but is that such a high price to pay for immortalizing the past?

These kinds of ventures have already begun sprouting up around the world. The Smithsonian is scanning artefacts and even entire prehistoric caves, and is distributing those scans among history enthusiasts around the world. What better way to ensure that these creations will last forever? Similarly, Google is adding hundreds of 3D models of art pieces to its Google Art Project initiative. That’s a very good start to a longer-term process, and if progress continues at this pace, we will probably immortalize most of the world’s artefacts within a decade, with major architectural monuments following soon after. Indeed, one could well say that Google’s Street View project is preserving our cities for eternity.

(If you want to see the immortal model of an ancient art piece, just click on the next link – )

https://sketchfab.com/models/ad88abf5596f46ab90c5dc4eb23f8a8e/embed

Architecture and history, then, are rapidly gaining invulnerability. The terrorists of the present have a ‘grace period’ in which to destroy some more pieces of art, but as we go forward into the future, most of that art will be preserved in the virtual world, to be viewed by all – and also to be recreated as needed.

So we’ll save (pun fully intended) our history and culture, but what about ourselves? Can we create virtual manifestations of our human selves in the digital world?

That might actually be possible in the foreseeable future.

 

Eternime – The Eternal Me

Eternime is just one of several highly ambitious companies and projects that are trying to create a virtual manifestation of an individual: you, me, or anybody else. The entrepreneurs behind this start-up leapt to fame in 2014, when they announced their plans to create intelligent avatars for every person. By going over the abundance of information we leave on our social networks, and by receiving as input answers to many different questions about a certain individual’s life, those avatars would be able to answer questions just as if they were that same individual.

 

 

Efforts to virtualize the self are also taking place in academia, as demonstrated in a new initiative: New Dimensions in Testimony, launched at the University of Southern California and led by Bill Swartout, David Traum, and Paul Debevec. In the project, interviews with Holocaust survivors are recorded and separated into hundreds of different answers, which the avatar then provides when asked.
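The basic retrieval idea behind such an avatar can be sketched quite simply: match each incoming question against the questions the recorded answers were indexed under, and play back the best match. The real system uses far more sophisticated language processing; the naive word-overlap matcher below, with made-up example data, only illustrates the principle:

```python
# Illustrative sketch of a question-answering avatar: the system holds
# pre-recorded answers and returns the one whose indexed question best
# matches the visitor's question. This naive word-overlap matcher is a
# stand-in for the far more sophisticated NLP used in the real project.

def tokenize(text):
    return set(text.lower().replace("?", "").replace(".", "").split())

def best_answer(question, recorded_answers):
    """Pick the recorded answer whose index question shares the most
    words with the visitor's question."""
    q_words = tokenize(question)
    scored = [
        (len(q_words & tokenize(index_q)), answer)
        for index_q, answer in recorded_answers
    ]
    return max(scored)[1]

# Hypothetical (index question, recorded answer) pairs:
answers = [
    ("Where were you born?", "I was born in a small town near Krakow."),
    ("What did you eat in the camp?", "Mostly thin soup, and bread when we were lucky."),
    ("How did you survive?", "I survived thanks to luck, and to people who helped me."),
]

print(best_answer("Can you tell me where you were born?", answers))
# prints "I was born in a small town near Krakow."
```

Even this crude matcher shows why hundreds of recorded answers are needed: the avatar never generates anything new, it only selects the most relevant recording.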

I think the creators of both projects will agree that they are still in very early phases, and that nobody will mistake the avatars for accurate recreations of the original individuals they were based on. However, as they say, “It’s a good start”. As data storage, computing capabilities and recording devices continue to improve exponentially, we can expect more and more virtualization of individuals to take place, so that their memories and even personalities are kept online for a very long time. If we take care to distribute these virtual personalities around the world, they will be virtually immune to almost all terrorist acts, except perhaps the very largest.

 

Conclusion

In recent decades we’ve started creating virtual manifestations of information, objects and even human beings, and distributing them throughout the world. Highly distributed virtual elements are exceedingly difficult to destroy or corrupt as long as there’s a community that acknowledges their worth, and they can thus be conserved for an extremely long time. While the original physical objects remain extremely vulnerable to terrorist attacks, their virtual manifestations are far more resistant to harm.

So what should we do to protect our culture from terrorism? Virtualize it all. 3D-scan every monument and every statue, every delicate porcelain cup and every ancient book in high resolution, and upload it all to the internet, where it can be shared freely between the people of the world. The physical monuments can and will be destroyed at some point in the future. The virtual ones will carry on.

 

 

 

 

 

Why Science Fiction is Necessary for Our Survival in the Future

Two weeks ago it was “Back to the Future Day”. More specifically, Doc and Marty McFly reached the future on exactly October 21, 2015 in the second movie in the series. Being a futurist, I was invited to several television and radio talk shows to discuss the shape of things to come, which is pretty ridiculous, considering that the future is always about to come, and we should talk about it every day – not just on a day arbitrarily chosen by the scriptwriters of a popular movie.

All the same, I’ll admit I had an uplifting feeling. On October 21st, everybody was talking about the future. That made me realize something about science fiction: we really need it. Not just for the technological ideas that it gives us (like cellular phones and Tricorders from Star Trek), but also for the expanded view of the future that it provides us with.

Sci-fi movies and books take root in our culture, and establish a longing for, and an expectation of, a well-defined future. In that way, sci-fi creations provide us with a valuable social tool: a radically prolonged cycle-time, which is the length of time an individual in society tends to look forward to and plan for in advance.

Cycle-times in the Past

Mother Evolution has shaped us – as human beings, and as living organisms in general – to fulfill one main goal: transferring our genes to our descendants. We are, to paraphrase Richard Dawkins, trucks that carry the load of our genes into the future, as far as possible from our current starting point. It is curious to realize that in order to preserve our genes into the future, we must be almost totally focused on the present. A prehistoric person who was not always on the alert for encroaching wolves, lions and tigers would not have survived very long. Millions of years of evolution have designed living organisms to focus almost entirely on the present.

And so, for the first few tens of thousands of years of human existence, we ran away from the tigers and chased after the deer, with a very short cycle-time – probably less than a single day.

It is difficult, if not impossible, to know exactly when we managed to strike a bargain with Grandfather Time. The bargain provided early humans with great power, and all they needed to do in return was to measure and document the passing of hours and days. I believe we started measuring time quite early in human history, since time measurement brought power, and power ensured survivability – and thus the passing of genes and time-measurement methodologies to the next generation.

The first cycle-times were probably quite short. Early humans could roughly calculate how long it would take the sun to set, according to its position in the sky, and so they knew when to start or end a hunt before darkness fell. Their cycle-time was a single day. A woman who wanted to predict her upcoming menstrual period – which could draw predators and make hunting more difficult for her – could do so by looking at the moon, and by making a mark on a stick every night. Her cycle-time was a full month.

The great leap forward occurred in agricultural civilizations, which were based on an understanding of the cyclical nature of time: a farmer must know the cyclical order of the seasons of the year, and realize their significance for his field and crops. Without looking a full year ahead into the future, agricultural civilizations could not have reached their full height. And so, ten thousand years ago, the first agricultural civilizations set a cycle-time of a whole year.

And that is pretty much the way it remained ever since that time.

One of the most ancient cycle-times, and the most common one as well: the seasons of the year.

Religious Cycle-times

Religions initially had the potential to provide longer cycle-times. Clergies often documented history and attempted to forecast the future – usually by creating or establishing complex mythologies. Judaism prolonged the agricultural cycle-time, for example, by setting a seven-year cycle of tending one’s field: six years of growing crops, and a seventh year (Shmita, in Hebrew) in which the fields are allowed to rest.

“For six years you are to sow your fields and harvest the crops, but during the seventh year let the land lie unplowed and unused.” – Exodus 23:10-11.

Most of the religious promises for the future, however, were vague, useless or even harmful. In his book The Clock of the Long Now, Stewart Brand repeats an old joke that caricatures, with more than a shred of truth, the difficulties of the Abrahamic religions (i.e. Judaism, Christianity and Islam) in dealing with the future and creating useful cycle-times in the minds of their followers. “Judaism,” writes Brand, “says [that] the Messiah is going to come, and that’s the end of history. Christianity says [that] the Messiah is going to come back, and that’s the end of history. Islam says [that] the Messiah came, and history is irrelevant.” [the quote has been slightly modified for brevity]

While this is obviously a joke, it reflects a deeper truth: that religions (and cultures) tend to focus on a single momentous future, and ignore anything else that comes along. Worse, the vision of the future they give us is largely unhelpful since its veracity cannot be verified, and nobody is willing to set an actual date for the coming of the Messiah. Thus, followers of the Abrahamic religions continue their journey into the future, with their eyes covered with opaque glasses that have only one tiny hole to let the light in – and that hole is in the shape of the Messiah.

Religious futile cycle-time: everybody is waiting for the Messiah, who will come sometime, at some place, somehow.

Why We Need Longer Cycle-times

When civilizations fail to consider the future in long cycle-times, they head towards inevitable failure and catastrophe. Jared Diamond illustrates this point time and time again in his masterpiece Collapse, in which he reviews several extinct civilizations, and the various ways in which they failed to adapt to their environment or plan ahead.

Diamond describes how the Easter Islanders did not think in the cycle-times of trees and soil, but in shorter, human cycle-times. They greedily cut down too many of the island’s trees, and over several decades they squandered its natural resources. Similarly, the Norse settlers in Greenland could not think in a cycle-time long enough to encompass the grasslands and the changing climate, and were forced to evacuate the island or freeze to death after their goats and cattle had damaged Greenland’s delicate ecology.

Agricultural civilizations, as I wrote earlier, tend by nature to think in cycle-times no longer than several years, and find it difficult to adjust their thinking to longer cycle-times: ones that apply to trees, to soil, and to the evolution of animals (and humans). As a result, agricultural civilizations damage all of the above, disrupt their environment, and eventually disintegrate and collapse when their surroundings can’t support them anymore.

If we wish to keep humanity in existence over time, we must switch to thinking in longer cycle-times that span decades and centuries. This is not to say that we should plan too far ahead – it’s always dangerous to forecast the long term – but we should constantly attempt to consider the consequences of our doings in the far-away future. We should always think of our children and grandchildren as we take steps that could determine their fate several decades from now.

But how can we implement such long-term cycle-times into human culture?

If you still remember where I began this article, you probably realize the answer by now. In order to create cycle-times that last decades and centuries, we need to visit the future again and again in our imagination. We need to compare our achievements in the present to our expectations and visions of the future. This is, in effect, the end-result of science fiction movies and books: the best and most popular of them create new cycle-times that become entwined in human culture, and make us examine ourselves in the present, in the light of the future.

Movie Time

Science fiction movies and stories have an impressive capability to influence social consciousness. Karel Capek’s 1920 theater play R.U.R., for example, not only added the word “robot” to the English lexicon, but also infected Western society with the fear that robots will take over mankind – just as they did in Capek’s play. Another influential movie, The Terminator, was released in 1984 and solidified that fear.

Science fiction does not have to make us fear the future, though. In Japanese culture, the cartoon robot Astro Boy became a national symbol in 1952, and ever since, the Japanese have been much more open and accepting towards robots.

Astro Boy: the science fiction series that made Japanese view robots much more warmly than the West.

The most influential science fiction creations are those that include dates, which in effect are forecasts of certain futures. These forecasts provide us with cycle-times that we can use to anchor our thinking whenever we contemplate the future. When the year 1984 came, journalists all over the world tried to analyze society and see whether George Orwell’s dark and dystopian dream had actually come true. When October 21, 2015 arrived, barely two weeks ago, I was interviewed almost all day long about the technological and societal forecasts made in Back to the Future. And when the year 2029 finally comes – the year in which Skynet is supposed to be controlling humanity, according to The Terminator – I confidently forecast that numerous robotics experts will find themselves invited to talk shows and other media events.

As a result of the above science fiction creations, and many others, humanity is beginning to enjoy new and ambitious cycle-times: we look forward in our mind’s eye towards well-designated future dates, and examine whether our apocalyptic or utopian visions for them have actually come true. And what a journey into the future that is! The most humble cycle-times in science fiction span several decades ahead. The more grandiose ones leap forward to the year 2364 (Star Trek), 2800 (Dan Simmons’ Hyperion Cantos) or even to the end of the universe and back again (in Isaac Asimov’s short story The Last Question).

The longest cycle-times of science fiction – those dealing with thousands or even millions of years ahead – may not be particularly relevant for us. The shorter cycle-times of decades and centuries, however, receive immediate attention from society, and thus have an influence on the way we conduct ourselves in the present.

Conclusion

Humanity has great need of new cycle-times, far longer than any established in its history. While policy makers attempt to take into account forecasts that span decades ahead, the public is generally neither exposed to nor influenced by such reports. Instead, the cycle-times of many citizens are calibrated according to popular science fiction creations.

Hopefully, those longer cycle-times will allow humanity to prepare in advance for longer-term existential challenges, such as ecological catastrophes or social collapse. At the same time, longer cycle-times can also encourage and push forward innovation in certain areas, as entrepreneurs and innovators struggle to fulfill the prophecies made for certain technological developments (just think of all the clunky hoverboards that were invented in the run-up to 2015 as proof).

In short, if you want to save the future, just write science fiction!

Batman Exists – and His Name Is Bill Gates

Today is Batman Day – the day on which we celebrate Batman’s triumph over evil, again. Batman seems to save the world (or Gotham, at least) at least once a year, and yet the baddies just keep streaming to his doorstep. While frustrating, this fact does not discourage the Bat-fans, who flock to the comics stores to celebrate one of the most renowned heroes of the day.

Sadly, they almost completely ignore the real heroes of our times: people like Bill Gates, Warren Buffett and Mark Zuckerberg, who have pledged to give at least half of their fortunes to charity, along with more than 135 other billionaires who have signed a similar pledge.

Bill and his wife Melinda alone have pledged over $30 billion to various charities, and have founded the Bill & Melinda Gates Foundation, which continually gives out grants and monetary assistance to fight poverty, hunger and disease worldwide. Warren Buffett has pledged a similar amount of $30 billion to the Foundation as well. According to an infographic from 2012, the Gates’ generosity has saved almost six million lives by bringing vaccinations and improving healthcare internationally.

So why is it that we hold Batman in such high esteem, while pretty much ignoring Bill, Melinda, Buffett and the other billionaires?

To understand the reasons, we need to go back in time and view history as a series of waves, as the noted futurist Alvin Toffler did in his 1980 book, The Third Wave.

The Third Wave: a masterpiece about the future, or rather the present of current times, since the book was published in 1980. Highly recommended!

A History of Waves

In his (highly recommended) classic, Toffler described two waves that have already swept over humanity and created new civilizations. The First Wave was the one that replaced hunter-gatherer societies with agricultural ones. The Second Wave was the Industrial Revolution, which led to the standardization and centralization of manufacturing and governance. And the Third Wave, which we are experiencing right now, is leading to the creation of the post-industrial society, in which wealth is measured in information, and not necessarily in physical products (to appreciate that, consider that Google makes approximately $30 billion just by selling and rerouting information).

The interesting thing about those waves is that while people change their lifestyle, their consciousness and culture remain largely ‘stuck’ in the previous waves. In fact, our mentality still firmly adheres to the era before even the agricultural wave (the First Wave), when the heroes and top guns were the chieftains and the hunters. And what distinguished them? They had big, bulging muscles, and were largely the macho types, constantly competing among themselves over who was stronger.

In other words, they were largely the archetype of all comics, anime and manga superheroes.

A typical superhero – strong and assumedly fertile.
Image originally taken from Progressiveboink

We see this affection for the big macho types in many other places. Jared Diamond described in his masterpiece Collapse how Australians still view the cowboys and lone farmers with great affection, as the “ideal Australians”. Similarly, a soldier from the Marine Corps enjoys far greater prestige than a cyber-hacker, despite the fact that the latter is almost certainly more influential. The same applies to the operators of military unmanned aerial vehicles, who are ridiculed by the ‘real pilots’.

In other words, we are all still mentally fastened to an era that precedes even the First Wave – more than 10,000 years ago. The principles of that time, which are largely in contradiction to the way the world works today, include –

  • Might: brawn over brains;
  • Materialism: materials (food, money in your hand) are more important than information (money in your virtual bank account);
  • Wholeness: individuals and groups are valued by the work they do themselves, while those who outsource labor are considered lazy or money mongers;
  • Cleanliness: occupations that deal with ‘dirty’ jobs, like cleaning human excrement or handling the garbage bins on the streets, are considered much less prestigious than most other occupations – even though they may be high-earning professions, and are certainly important for society’s wellbeing.

Leaving the Past behind Us

Can we leave our evolutionary history behind us, and move forward to a more progressive future? I believe we can. Our brains may be largely wired in the same way they were 10,000 years ago, but we have a large advantage over nature: our consciousness and ability to essentially rewire our own brain just by thinking and comprehending new ideas.

My friend Yaron Assa demonstrated how we can transcend our old ways of thinking, in a lesson he gave in my course about foresight and forecasting. He showed the audience two lines, and asked which was longer (you can see the challenge below). Everybody sniggered and told Yaron that both lines are equally long – it’s an old and well-known visual illusion. To which Yaron calmly replied that they had just proved that human beings can recognize their biases – even ones rooted in the brain’s wiring and visual system – and overcome them, if only they know about them in advance.

The Müller-Lyer optical illusion. Which line is longer – A or B?
Originally from the “Us and Them” article

I believe we can put the past behind us. Not completely, of course, but largely so. We’re already on that path right now. Many of our best comics superheroes turn out to be geeks: Hank Pym (Ant-Man) and Bruce Banner (the Hulk) are brilliant scientists, Tony Stark (Iron Man) is a super-engineer, Batman is constantly re-engineering his equipment, and so on. So maybe we’re beginning to move past the idea that brawn trumps brains.

Are brains starting to become more valued than brawn?
Originally from Comicvine

In summary, we can leave some of the past behind us, but before we do, we have to recognize just how much we cling to it. So while you’re out there celebrating Batman Day with your capes and gloomy looks, don’t forget the real heroes of our times – the ones who are not macho, who don’t have bulging muscles, and who, like Batman, largely hide behind the scenes and do the dirty work that nobody likes to think about.