The Activated World: from Solar Power to Food


Solar panels have undergone rapid evolution over the last ten years. I’ve written about this in previous posts on this blog (see, for example, the forecast that we’ll have flying cars by 2035, which depends largely on the sun providing us with an abundance of electricity). The graph below pretty much says it all: the cost of producing a single watt of solar energy has fallen to somewhere between 0.5 and 1 percent of what it was just forty years ago.

Even as prices fall, solar panel installations keep multiplying worldwide, roughly doubling every two to three years. Worldwide solar capacity in 2014 was 53 times higher than in 2005, and global solar photovoltaic installations grew 34 percent in 2015, according to GTM Research.

Source: GTM Research
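To put that price drop in perspective, here’s a quick back-of-envelope sketch (the figures are the ones quoted above, taken as assumptions): if a watt of solar capacity now costs 0.5-1 percent of what it did forty years ago, the implied average price decline works out to roughly 11-12 percent per year, compounded.

```python
# Illustrative sketch: the implied average annual price decline,
# assuming solar now costs 0.5-1% of its price forty years ago.

def annual_price_factor(price_ratio: float, years: int) -> float:
    """The average factor the price was multiplied by each year."""
    return price_ratio ** (1 / years)

for ratio in (0.01, 0.005):  # 1% and 0.5% of the original price
    factor = annual_price_factor(ratio, 40)
    print(f"{ratio:.1%} of original price -> ~{(1 - factor):.0%} cheaper every year")
```

A steady ~11 percent annual decline compounding for four decades is exactly the kind of quiet exponential that sneaks up on regulators.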

It should come as no surprise that regulators are beginning to take note of the solar trend. Indeed, two small California cities – Lancaster and Sebastopol – passed laws in 2013 requiring new houses to include solar panels on their roofs. And now San Francisco has finally joined the fray as the first large city in the world to require solar panels on every new building.

San Francisco has a lofty goal: meeting all of its energy demands by 2025, using renewable sources only. The new law seems to be one more step towards that achievement. But more than that, the law is part of a larger principle, which encompasses the Internet of Things as well: the Activation of Everything.


The Activation of Everything

To understand the concept of the Activation of Everything, we need to consider another promising piece of legislation, soon to be introduced in San Francisco by Supervisor Scott Wiener. Wiener’s proposal would allow solar roofs to be replaced with living roofs – roofs covered with soil and vegetation. According to a 2005 study, living roofs reduce cooling loads by 50-90 percent and reduce stormwater runoff into the sewage system. They retain much of the rainwater, which later returns to the atmosphere through evaporation. They enhance biodiversity, sequester carbon and even capture pollution. Of course, not every plant can be grown efficiently on such roofs – particularly not in dry California – but there’s little doubt that optimized living roofs can contribute to the city’s environment.

Supervisor Wiener explains the reasoning behind the solar legislation in the following words:

“This legislation will activate our roofs, which are an under-utilized urban resource, to make our City more sustainable and our air cleaner. In a dense, urban environment, we need to be smart and efficient about how we maximize the use of our space to achieve goals like promoting renewable energy and improving our environment.”

Pay attention to the “activate our roofs” part. Supervisor Wiener is absolutely right that roofs are an under-utilized urban resource. Whether you want to use them to harvest solar power or to grow plants and improve the environment, the idea is clear: we need to activate our resources – by any means possible – so that we maximize their use.

A living roof in lower Manhattan. Source: Alyson Hurt, Flickr

That is what the Activation of Everything principle means: activate everything, whether by allowing surfaces and items to harvest power or resources, or by giving them sensing and communication capabilities. In a way, activation can also mean convergence: take two functions or services that used to be performed separately, and allow them to be performed together. A roof is then no longer just a means of shade and protection from the weather; it can also harvest energy and improve the environment.

The Internet of Things is a spectacular example of the Activation of Everything principle in action. In the Internet of Things world, everything will be connected: every roof, every wall, every bridge and shirt and shoe. Every item will be activated to serve added purposes. Our shirts will communicate our respiration rate to our physicians. Bricks in walls will report on their structural integrity to engineers. Bridges will let us know when they’re close to maximum capacity, and so on.

The Internet of Things largely relies on sophisticated electronic technologies, but the Activation of Everything principle is more general than that. The Activation of Everything can also mean creating solar or living roofs, or even creating walls that include limestone-secreting bacteria that can fix cracks as soon as they form.

Where else can we implement the Activation of Everything principle in the future?


The Activation of Cars

There have been many ideas for creating roads that harvest energy from cars’ movements. Unfortunately, the laws of thermodynamics reveal that such roads would in fact ‘steal’ that energy from passing cars, by making it harder for them to travel along the road. Not a good idea. The activation of roofs works well precisely because it has a good ROI (Return on Investment): a relatively low energy investment for large returns. Not so with energy-stealing roads.

But there’s another unutilized resource in cars – the roof. We can use the Activation principle to derive insights about the future of car roofs: hybrid cars will be covered with solar panels, which will be used to harvest energy when they’re sitting in the parking lot, and store it for the ride home.

Make no mistake about the math: cars with solar roofs won’t be able to drive endlessly. In fact, if they relied on solar power alone, they’d barely even crawl. They will, however, be able to power the electrical devices in the car, and trucks may even use solar energy on long journeys to refrigerate the goods they carry. If the cost of solar panel installation continues to fall, these uses could be viable within the decade.
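A rough sanity check bears this out. Every number below is an assumption, not a spec – about two square meters of usable roof, 20 percent panel efficiency, five peak-sun-hours of daily insolation in a sunny climate, and a small EV consuming around 0.15 kWh per kilometer – but the sketch shows why a solar roof buys you the ride home rather than endless driving:

```python
# Back-of-envelope sketch; every figure here is an assumption, not a spec.
ROOF_AREA_M2 = 2.0                  # usable panel area on a car roof
PANEL_EFFICIENCY = 0.20             # typical commercial panel
DAILY_INSOLATION_KWH_PER_M2 = 5.0   # a sunny climate
EV_KWH_PER_KM = 0.15                # small electric car

def daily_harvest_kwh() -> float:
    """Energy harvested on a sunny day while parked."""
    return ROOF_AREA_M2 * PANEL_EFFICIENCY * DAILY_INSOLATION_KWH_PER_M2

def daily_range_km() -> float:
    """Driving range that harvest buys."""
    return daily_harvest_kwh() / EV_KWH_PER_KM

print(f"~{daily_harvest_kwh():.1f} kWh per sunny day")
print(f"~{daily_range_km():.0f} km of driving range")
```

A dozen-odd kilometers a day: useless for a road trip, perfectly adequate for the commute home or for running the air conditioning.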


The Activation of Farmlands

Farmlands are being activated today in many different ways: from sensors all over the field – and sometimes in every tree trunk – to farmers supplementing their livelihood by deploying solar panels and ‘farming electricity’. Some are even combining solar panels with crop and animal farming, mounting the panels a few meters above the field and growing plants that can make the most of the limited sunlight that reaches them.


Anna Freund runs Open View Farm. Source: VPR

The Activation of the Air

Even the air around us can be activated. Aerial drones may be considered an initial attempt to activate the sky by filling it with flying sensors, but they are large, cumbersome, and interfere with aerial traffic and with the view. In the future, however, we’ll be able to activate the air in subtler ways – for example with smart dust: extremely small sensors with limited wireless connectivity that transmit data about their whereabouts and the conditions there.


The Activation of Food

Food is one of the few things that has barely been activated so far. Food today serves only two purposes: to please us by tasting great, and to nourish the body. According to the Activation principle, however, food will soon serve several other purposes as well. Food items could be used to deliver therapeutics or sensors into the body, or could even be produced with built-in biocompatible electronics and LEDs to make the food look better on the plate.

Activated food: a banana with an edible food sensor, developed by researchers at Tufts University.


As human beings, we’ve always searched for ways to optimize efficiency and make the best use of the limited resources we have. One of those limited resources is space, which is why we now try to activate – that is, add functions to – every surface and item.

It’s fascinating to consider how the Activation of Everything will shape our world in the next few decades. We will have sensors everywhere, solar panels everywhere, batteries and electronics everywhere. It will be a world where nothing is as it seems at first glance anymore. An activated world – a living world indeed.


The Citizens Who Solve the World’s Problems

It’s always nice when news items that support each other and point to a certain future appear in the same week, especially when each is exciting on its own. Last week we saw this happen with three different news items:

  1. A scientific finding that a single bacterial strain grows 60 percent better in space than on Earth. The germs used in the experiment were collected by the public;
  2. A new Kickstarter project for the creation of a DNA laboratory for everyone;
  3. A new project proposed on a crowdfunding platform, requesting public support to develop a means of rapidly detecting the Zika virus in Brazil without the need for a laboratory.

Let’s go over each to see how they all come together.


Space Microbes

Between 2012 and 2014, citizens throughout the United States collected bacteria samples from their environment using cotton swabs and mailed them to the University of California, Davis. Out of the large number of samples that arrived at the lab, 48 bacterial strains were isolated and selected to be sent to space aboard the International Space Station (ISS). Most of the strains behaved similarly on Earth and in space. One strain, however, surpassed all expectations and proliferated rapidly, growing 60 percent better in space.

Does this mean that the bacteria, which go by the name Bacillus safensis, are better adapted to life in space? I would stay wary of such assertions. We don’t know yet whether the improved growth was a result of the microgravity conditions on the space station or of some other unquantified factor. It is entirely possible that the humidity, the oxygen concentration, or the quality of the growth medium was somehow different on the space station. The result, in short, could easily be a fluke rather than an indicator that some bacteria grow better in microgravity. We’ll have to wait for further evidence before reaching a final conclusion.

The most exciting thing for me here is that the bacteria in question were collected by the public, in a demonstration of the power of citizen science. People from all over America took part in the project, and as a result of their combined effort, the scientists ended up with a large number of strains, some of which they probably would not have thought to use in the first place. This is one of the main strengths of citizen science: providing many samples of research material for scientists to analyze and experiment on.

Study author Darlene Cavalier swabs the crack of the Liberty Bell to collect bacterial samples. Credit: CC by 4.0

DNA Labs for Everyone

Have you always wanted to check your own DNA? To find out whether you have a certain variant of a gene, or identify the animals whose meat appears in your hamburger? Well, now you can do that easily by ordering the Bento Lab: “A DNA laboratory for everyone”.

The laptop-sized lab includes a centrifuge for extracting DNA from biological samples, a PCR thermocycler for amplifying specific DNA sequences, and an illuminated gel unit to visualize the results and ascertain whether the sample contains the DNA sequence you were looking for. All that for less than a thousand dollars. This is ridiculously cheap, particularly when you consider that similar lab equipment would easily have cost tens of thousands of dollars just twenty years ago.
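To get a feel for the kind of planning a Bento Lab owner would do before running a PCR, here’s a minimal sketch that estimates a primer’s melting temperature with the Wallace rule (Tm = 2(A+T) + 4(G+C), a standard rule of thumb for short primers). The primer sequence is an invented example:

```python
# Wallace rule: a quick melting-temperature estimate for short PCR primers.
def wallace_tm(primer: str) -> int:
    p = primer.upper()
    at = p.count("A") + p.count("T")   # weak A-T pairs
    gc = p.count("G") + p.count("C")   # strong G-C pairs
    return 2 * at + 4 * gc             # degrees Celsius, approximately

print(wallace_tm("ATGCGTACGTTAGC"))    # hypothetical primer -> 42
```

A real experiment would use nearest-neighbor thermodynamics for better accuracy, but the point stands: the planning happens on a laptop, and the bench work on a kitchen table.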

The Bento Lab - Citizen Science for DNA analysis

The Kickstarter project has already gained support from 395 backers pledging nearly $150,000, surpassing its goal by 250 percent in just ten days. That’s amazing progress for a project that’s really only suitable for hard-core makers and bio-hackers.

Why is the Bento Lab so exciting? Because it gives power to the people. The current model is very limited, but future versions of mobile labs will contain better equipment and offer greater capabilities to the bio-hackers who purchase them. You don’t have to be a futurist to say that – there are already other projects attempting to bring CRISPR, the highly efficient gene-editing technology, to the masses.

This, then, is a great example of how citizen science is going to keep evolving: people won’t just collect bacterial samples in the streets and send them to distinguished scientists. Instead, private individuals – ordinary Joes like you and me – will be able to experiment on these bacteria in their homes and garages.

Should you be scared? Obviously, yeah. The power to re-engineer biology is nothing to scoff at, and we will need to think up ways to regulate public bio-engineering. However, the public could also use this kind of power to contribute to scientific projects around the world, to sequence their own DNA, and eventually to create biological therapeutics at home.

Which brings us to the last news item I wanted to write about in this post: citizens developing means for rapidly detecting the Zika virus.


Entrepreneurs against Viruses

The Zika virus has begun spreading rapidly in Brazil, with devastating consequences. The virus can pass from pregnant women to their fetuses and has been linked to microcephaly, a serious birth defect of the brain, in babies. According to the Centers for Disease Control and Prevention, the virus will likely continue to spread to new areas.

Although the World Health Organization declared the Zika virus a public health emergency merely two months ago, citizen scientists are already working diligently to develop new ways to detect it. A team spanning the UK, Israel and Brazil has sprung up, with young biotech entrepreneurs leading research to create a better system for rapidly detecting the virus in human beings and mosquitos. The group is now asking the public to chip in and back the project, and has already gathered nearly $6,000.

This initiative is a result of the movement that is bringing the capability to do science to everyone. When every citizen armed with an undergraduate degree in biology can do science at home, we shouldn’t be surprised when new methods for detecting viruses crop up in distant places around the world. We’re essentially decentralizing the scientific community – and as a result we can have many more people working on strange and wonderful ideas, some of which will actually bear fruit to the benefit of all.



As scientific devices and appliances become cheaper and make their way into the hands of individuals around the world, citizen science is becoming more popular and having an ever greater impact. Today we are witnessing the rise of the citizen scientists – those who are not supported by universities or research centers, but instead conduct experiments in their own homes.

A decade from now, we will see at least one therapeutic manufactured by citizen scientists easily and cheaply, undercutting the high prices pharma companies demand for their drugs. Heck, even kids will be able to pull off that kind of science in garage labs. Less than a decade later, we will witness citizen scientists conducting medical research on their own, running analyses over the medical records of hundreds – maybe millions – of people to uncover how new or existing therapeutics can be used to treat certain medical conditions. Many of these research projects will not be supported by the government or by big pharma with the intent to make money, but will instead be supported by the public itself on crowdfunding sites.

Of course, for all that to happen we need to support citizen scientists today. So go ahead – contribute to the campaign against Zika, purchase a Bento Lab for your kitchen, or find a citizen science project or kids’ game to join on SciStarter. We can all take part in improving science, together.


Visit other posts in my blog about crowdfunding projects, such as Robit: A new contender in the field of house robots; or read my analysis Why crowdfunding scams are good for society.

Science Just Wants To Be Free

This article was originally published in the Huffington Post


For a long time, scientists have been held in thrall by publishers. They worked voluntarily – without pay – as editors and reviewers for the publishers, and they allowed their research to be published in scientific journals without receiving anything in return. No wonder scientific publishing has been considered a lucrative business.

Well, that’s no longer the case. Scientific publishers are now struggling to maintain their stranglehold on scientists. If they succeed, science and the pace of progress will take a hit. Luckily, the entire scientific landscape is turning against them – but a little support from the public will go a long way toward ensuring the eventual downfall of an institution that is no longer relevant or useful to society.

To understand why things are changing, we need to look back to 1665, when the British Royal Society began publishing research results in a journal called Philosophical Transactions of the Royal Society. Since the number of pages in each issue was limited, the editors could pick only the most interesting and credible papers to appear in the journal. As a result, scientists from all over Britain fought to have their research published there, and any scientist whose work appeared in an issue gained immediate recognition throughout Britain. Scientists were even willing to serve as editors of scientific journals, since it was a position that commanded respect – and gave them the power to push their views and agendas in science.

Thus was the deal struck between scientific publishers and scientists: the journals provided a platform for scientists to present their research, and the scientists fought tooth and nail to have their papers accepted – often paying out of their own pockets for the privilege. The publishers then held full copyright over the papers, to ensure that the same paper would not appear in a competing journal.

That, at least, was the old way of publishing scientific research. The reason the journal publishers were so successful in the 20th century was that they acted as aggregators and selectors of knowledge. They employed the best scientists in the world as editors (almost always for free) to select the best papers, and they brought all the necessary publishing processes together in one place.

And then the internet appeared, along with a host of automated processes that let every scientist publish and disseminate a new paper with minimal effort. Suddenly, publishing a new scientific paper and making the scientific community aware of it could carry a radical new price tag: it could be completely free.

Free Science

Let’s go through the process of publishing a research paper and see how easy and effortless it has become:

  1. The scientist sends the paper to the journal: This can now be done easily over the internet, with no cost for mail delivery.
  2. The paper is routed to the editor handling the paper’s topic: This is done automatically – the authors specify keywords that route the paper straight to the right editor’s e-mail. Since the editor is actually a scientist volunteering to do the work for the publisher, there’s no cost attached anyway. Nor is there any need for a human secretary to spend time and effort cataloguing papers and sending them to editors manually.
  3. The editor sends the paper to specific scientific reviewers: All the reviewers are working for free, so the publishers don’t spend any money there either.

Let’s assume that the paper was confirmed, and is going to appear in the journal. Now the publisher must:

  1. Paginate, proofread, typeset, and ensure the use of proper graphics in the paper: These tasks are now performed nearly automatically using word processing programs, and are usually handled by the original authors of the paper.
  2. Print and distribute the journal: This is the only step that necessarily costs real money, since it is performed in the physical world, and atoms are notoriously more expensive than bits. But do we even need this step anymore? I have been walking the corridors of academia for more than ten years, and I’ve yet to see a scientist with his nose buried in a printed journal. Instead, scientists read papers on their computer screens or print them in their offices. The mass-printed version is almost completely redundant. There is simply no need for it.

In conclusion, it’s easy to see that while the publishers served an important role in science a few decades ago, they are simply not necessary today. The steps above can easily be handled by community-managed sites like arXiv, and even the selection of high-quality papers can now be performed by scientists themselves, in forums like Faculty of 1000.

The publishers have become redundant. But worse than that: they are damaging the progress of science and technology.

The New Producers of Knowledge

A few years from now, the producers of knowledge will not be human scientists but computer programs and algorithms. Programs like IBM’s Watson will skim through hundreds of thousands of research papers and derive new meanings and insights from them. This will be an entirely new field of scientific research: retrospective research.

Computerized retrospective research is happening right now. A new model in developmental biology, for example, was recently discovered by an artificial intelligence engine that went over just 16 previously published experiments. Imagine what will happen when AI algorithms cross-match thousands of papers from different disciplines and come up with new theories and models supported by the research of thousands of past scientists!
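At its simplest, retrospective research is pattern-mining over published text. The toy sketch below (the “abstracts” are invented placeholders) counts how often pairs of terms co-occur across papers; a real system would apply far more sophisticated models to full-text corpora, but the principle is the same:

```python
from collections import Counter
from itertools import combinations

# Invented placeholder "abstracts"; a real system would ingest millions.
abstracts = [
    "gene expression regulated by protein kinase signaling",
    "protein kinase inhibitors alter gene expression in tumors",
    "stem cell differentiation and gene expression profiles",
]
terms = ["gene expression", "protein kinase", "stem cell"]

pair_counts = Counter()
for text in abstracts:
    present = sorted(t for t in terms if t in text)
    pair_counts.update(combinations(present, 2))

# Terms that keep appearing together hint at a hidden connection.
for pair, n in pair_counts.most_common():
    print(pair, n)
```

Scale this up from three abstracts to thirty million, and “hidden connections” become new hypotheses no single human reader could have spotted.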

For that to happen, however, the programs need to be able to go over the vast number of research papers out there, most of which are copyrighted and held in the hands of the publishers.

You may say this is not a real problem. After all, IBM and other large data companies can easily cover the millions of dollars the publishers will demand annually for access to scientific content. But what about academic researchers? Many of them do not enjoy the backing of big industry, and will have no access to the scientific data of the past. Even top academic institutes like Harvard University find themselves hard-pressed to cover the annual fees the publishers demand for access to older papers.

Many ventures for using this data are based on the assumption that information is essentially free. We know that Google is wary of uploading scanned books from the last few decades, even when those books are no longer in circulation: Google doesn’t want to be sued by the copyright holders, and so it waits for the copyrights to expire before uploading an entire book and letting the public enjoy it for free. So many worthwhile projects could be conducted to derive scientific insights from literally millions of past research papers. Are we really going to wait nearly a hundred years before we can use all that knowledge? Knowledge, I should mention, that was gathered by scientists funded by the public – and should thus remain in the hands of the public.


What Can We Do?

Scientific publishers are slowly dying, while free publication and open access are becoming the norm. The transition, though, is going to take a long time, and offers no easy or immediate solution for the millions of research papers from the last century. What can we do about them?

Here’s one proposal. It’s radical, but it highlights one possible course of action: have the government, or an international coalition of governments, purchase the copyrights to all copyrighted scientific papers and open them to the public. The venture would cost a few billion dollars, true, but it would only have to happen once for the entire scientific publishing field to change its face. It would set right the old wrong of hiding research behind paywalls. That wrong was necessary in the past, when we needed the publishers, but today there is simply no justification for it. Most importantly, this move would mean that science could accelerate its pace by building easily on the roots cultivated by past generations of scientists.

If governments don’t do it, the public will. Already we see the rise of websites like Sci-Hub, which provides free (i.e. pirated) access to more than 47 million research papers. Persecuted by both the publishers and the government, Sci-Hub has recently been forced to move to the Darknet – the dark, anonymous section of the internet. Scientists who want to browse past research results – which were almost entirely paid for by the public – will thus have to venture into the Darknet, where weapon smugglers, pedophiles and drug dealers lurk today. That’s a sad turn of events that should make you think. Just be careful not to sell your thoughts to the scholarly publishers, or they may never see the light of day.


Dr Roey Tzezana is a senior analyst at Wikistrat, an academic manager of foresight courses at Tel Aviv University, blogger at Curating The Future, the director of the Simpolitix project for political forecasting, and founder of TeleBuddy.

The Future of Genetic Engineering: Following the Eight Pathways of Technological Advancement

The future of genetic engineering at the moment is a mystery to everyone. The concept of reprogramming life is an oh-so-cool idea, but it is mostly being used nowadays in the most sophisticated labs. How will genetic engineering change in the future, though? Who will use it? And how?

In an attempt to provide a starting point for discussion, I’ve analyzed the issue according to Daniel Burrus’ “Eight Pathways of Technological Advancement”, found in his book Flash Foresight. While the book offers more insight into creativity and business skills than into foresight, it does contain some interesting gems, like the Eight Pathways. I’ve led workshops in the past where I taught chief executives to use this methodology to gain insights about the future of their products, and it was a great success. So in this post we’ll try applying it to genetic engineering – and see what comes out.


Eight Pathways of Technological Advancement

Make no mistake: technology does not “want” to advance or to improve. There is no law of nature dictating that technology will advance, or in what direction. Human beings improve technology, generation after generation, to better solve their problems and make their lives easier. Since we roughly understand humans and their needs and wants, we can often identify how technologies will improve in order to answer those needs. The Eight Pathways of Technological Advancement, therefore, are generally those that adapt technology to our needs.

Let’s briefly go over the pathways, one by one. If you want a deeper understanding and more elaborate explanations, I suggest reading the full Flash Foresight book.

First Pathway: Dematerialization

By dematerialization we literally mean removing atoms from the product, leading directly to its miniaturization. Cellular phones, for example, have become much smaller over the years, as have computers, data storage devices and generally any tool that humans wanted to make more efficient.

Of course, not every product undergoes dematerialization. Even if we were to miniaturize cars’ engines, the cars themselves would still need to be large enough to hold at least one passenger comfortably. So we need to take into account that the device should still be able to fulfil its original purpose.

Second Pathway: Virtualization

Virtualization means taking certain processes and products that currently exist or are conducted in the physical world, and transferring them fully or partially into the virtual world. In the virtual world, processes are generally streamlined, and products have almost no cost. For example, modern car companies take as little as 12 months to bring a new car model to market. How can engineers complete the design, modeling and safety testing of such complicated models in less than a year? They simply use virtual simulation and modeling tools to design the cars – up to the point of crashing virtual cars, with virtual crash dummies inside, into virtual walls to gain insights about their (physical) safety.

Thanks to virtualization, crash dummies everywhere can relax. Image originally from @TheCrashDummies.

Third Pathway: Mobility

Human beings invent technology to help them fulfill certain needs and take care of their woes. Once that technology is invented, it’s obvious that they would like to enjoy it everywhere they go, at any time. That is why technologies become more mobile as the years go by: in the past, people could only speak on the phone from the post office; today, wireless phones can be used anywhere, anytime. Similarly, cloud computing enables us to work on every computer as though it were our own, by utilizing cloud applications like Gmail, Dropbox, and others.

Fourth Pathway: Product Intelligence

This pathway does not need much of an explanation: we experience its results every day. Whenever our GPS navigation system speaks up in our car, we are reminded of the artificial intelligence engines that help us in our lives. As Kevin Kelly wrote in his WIRED piece in 2014 – “There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ.”

Fifth Pathway: Networking

The power of networking – connecting people and items – has become clear in our modern age: Napster was the result of networking; torrents are the result of networking; even bitcoin and blockchain technology are manifestations of networking. Since products and services can gain so much from connecting their users, many of them will take this pathway into the future.

Sixth Pathway: Interactivity

As products gain intelligence of their own, they also become more interactive. Google completes our search phrases for us; Amazon suggests products we might want based on our past purchases. These service providers interact with us automatically, providing better service for the individual instead of catering to some average of the masses.

Seventh Pathway: Globalization

Networking means we can make connections all over the world, and as a result products and services become global. Crowdfunding platforms like Kickstarter, which suddenly enable local businesses to gain support from the global community, are a great example of globalization. Small firms can find themselves catering to a global market thanks to improvements in mail delivery systems – like a company that delivers socks monthly – and that is another example of globalization.

Eighth Pathway: Convergence

Industries are converging, and so are services and products. The iPhone is a convergence of a cellular phone, a computer, a touch screen, a GPS receiver, a camera, and several other products that have come together to create a unique device. Similarly, modern aerial drones can be seen as a result of the convergence pathway: a camera, a GPS receiver, an inertial measurement unit, and a few propellers to carry the whole unit in the air. Each component is useful on its own, but together they create a product that is much more than the sum of its parts.


How could genetic engineering progress along the Eight Pathways of technological improvement?


Pathways for Genetic Engineering

First, it’s safe to assume that genetic engineering as a practice will require less space and fewer tools to conduct (dematerializing genetic engineering). That is hardly surprising, since biotechnology companies are constantly releasing new kits and appliances that streamline, simplify and add efficiency to lab work. This trend also answers the need for mobility (the third pathway), since it means complicated procedures can be performed outside the top universities and labs.

As part of streamlining the work process of genetic engineers, some elements will be virtualized. As a matter of fact, the Virtualization of genetic engineering has been taking place over the past two decades, with scientists ordering DNA and RNA sequences over the internet, and browsing virtual genomic databases like NCBI and UCSC. The next step of virtualization seems to be occurring right now, with companies like Genome Compiler creating ‘browsers’ for the genome, with bright colors and easily understandable explanations that reduce the level of skill needed to plan a genetic engineering experiment.

A screenshot from Genome Compiler

How can we apply the pathway of Product Intelligence to genetic engineering? Quite easily: virtual platforms for designing genetic engineering experiments will include AI engines that aid the experimenter with the task at hand. The AI assistant will understand what the experimenter wants to do; suggest methods, methodologies and DNA sequences to help accomplish it; and possibly even – in a decade or two – conduct the experiment automatically. Obviously, that also satisfies the criterion of Interactivity.

If this future sounds far-fetched, consider that there are already lab robots conducting highly convoluted experiments – Adam and Eve, for example (see below). As the field of robotics strides forward, we may well see similar rudimentary robots working in makeshift do-it-yourself biology labs.

Networking and Globalization are essentially the same for the purposes of this discussion, and complement Virtualization nicely. Communities of biology enthusiasts are already forming all over the world, and they’re sharing their ideas and virtual schematics with each other. The iGEM (International Genetically Engineered Machines) annual competition is good evidence of that: undergraduate students worldwide take part in this competition, designing parts of useful genetic code and sharing them freely with each other. That’s Networking and Globalization, for sure.

Last but not least, we have Convergence – the convergence of processes, products and services into a single overarching system of genetic engineering.

Well, then, what would a convergence of all the above pathways look like?


The Convergence of Genetic Engineering

Taking together all of the pathways and converging them together leads us to a future in which genetic engineering can be performed by nearly anyone, at any place. The process of designing genetic engineering projects will be largely virtualized, and will be aided by artificial assistants and advisors. The actual genetic engineering will be conducted in sophisticated labs – as well as in makers’ houses, and in DIY enthusiasts’ kitchens. Ideas for new projects, and designs of successful past projects, will be shared on the internet. Parts of this vision – like virtualization of experiments – are happening right now. Other parts, like AI involvement, are still in the works.

What does this future mean for us? Well, it all depends on whether you’re optimistic or pessimistic. If you’re prone to pessimism, this future may look to you like a disaster waiting to happen. When teenagers and terrorists are capable of designing and creating deadly bacteria and viruses, the future of mankind is far from safe. If you’re an optimist, you could reason that as the power to re-engineer life spreads to the masses, innovations will arise everywhere. We will see glowing trees replacing lightbulbs in the streets, genetically engineered crops with better traits than ever before, and therapeutics (and drugs) being synthesized in human intestines. The truth, as usual, is somewhere in between – and we still have to discover it.



If you’ve been reading this blog for some time, you may have noticed a recurring pattern: I inquire into a certain subject, and then analyze it according to a certain foresight methodology. So far, such posts have covered the Business Theory of Disruption (used to analyze the future of collectible card games), Causal Layered Analysis (used to analyze the future of aerial drones and of medical mistakes) and Pace Layer Thinking. I hope to keep giving you orderly, proven methodologies that help in thinking about the future.

How you actually use these methodologies in your business, class or salon talk – well, that’s up to you.



Why “Magic: the Gathering” is Doomed: Lessons from the Business Theory of Disruption

Twenty years ago, when I was young and beautiful, I picked up a wrapped pack of cards in a computer games store, and read for the first time the tag “Magic: the Gathering”. That was the beginning of my long-time romance with the collectible card game. I imported the game to Israel, translated the rules leaflet into Hebrew for my friends, and went on to play semi-professionally for twenty years, up to the point when I became the Israeli champion. The game pretty much shaped my teenage years, and helped me make friends and meet interesting people from all over the world.

That is why it’s so sad to me to see the state of the game right now, and realize that it is almost certainly doomed to fail in the long run.


Magic: The Gathering. The game that has bankrupted thousands of parents.


The Rise and Decline of Magic the Gathering

Make no mistake: Magic the Gathering (Magic for short) is still the top dog among collectible card games in the physical world. According to a report released in 2014, the annual revenue from Magic grew by 182% between 2009 and 2014, reaching a total of around $250 million a year. That’s a lot of money, to be sure.

The only problem is that Hearthstone, a digital card game released at the beginning of 2014, has reached annual revenues of around $240 million in less than two years. I will not be surprised to see that number grow even larger in the future.

This is a bizarre situation. Wizards of the Coast (WotC), the company behind Magic, had twenty years to take the game online and turn it into a success. They failed miserably, and their meager attempts at doing so became a target for scorn and ridicule from players worldwide. While WotC did create an online platform to play Magic on, there were plenty of complaints: for starters, playing was extremely costly, since virtual card packs generally cost the same as packs in the physical world. An evening of playing a draft – a small tournament with only eight players – would’ve cost each player around ten dollars, and would’ve required a time investment of up to four straight hours, much of it wasted waiting for the other players in the tournament to finish their matches with each other and move on to the next round.

These issues meant that Magic Online was mostly reserved for the top players, who had the money and the willingness to spend it on the game. WotC was aware of the disgruntlement about the state of things, but chose to do nothing – after all, it had no real contenders in the physical or the digital market. What did it have to fear? It had no real reason to change. In fact, the only smart decision WotC managers could take was NOT to take a risk and try to change the online experience, but to keep on making money – and lots of it – from a game that functioned well enough. And they could continue doing so until their business was rudely and abruptly disrupted.


The Business Theory of Disruption

The theory of disruption was originally conceived by Harvard Business School professor Clayton M. Christensen, and described in his best-selling book The Innovator’s Dilemma. Christensen followed the evolution of several industries – particularly hard drives, but also metalworking, retail stores and tractors. He found that in each sector, the managers supported research and development, but all that R&D produced only two general kinds of innovations: sustaining innovations and disruptive ones.


The sustaining innovations were generally those that the customers asked for: increasing hard drive storage capacity, or making data retrieval faster. They led to obvious improvements, which brought immediate and clear benefit to the company in a competitive market.

The disruptive innovations, on the other hand, were those that completely changed the picture, and actually had a good potential to cost the company money in the short-term. Furthermore, the customers saw little value in them, and so the managers saw no advantage in pursuing these innovations. The company-employed engineers who came up with the ideas for disruptive innovations simply couldn’t find support for them in the company.

A good example for the process of disruption is that of the hard drive industry, a few years before the transition from 8-inch drives to 5.25-inch drives occurred. A quick look at the following parameters of the two contenders, back in 1981, explains immediately why managers in the 8-inch drive manufacturing companies were wary of switching over to the 5.25-inch drive market. The 5.25-inch drives were simply inefficient, and lost the competition with 8-inch drives in almost every parameter, except for their size! And while size is obviously important, the computer market at the time consisted mainly of “minicomputers” – computers that cost ~$25,000, and were the size of a small refrigerator. At that size, the physical volume of the hard drives was simply irrelevant.

| Attribute | 8-Inch Drives (Minicomputer Market) | 5.25-Inch Drives (Desktop Computer Market) |
|---|---|---|
| Capacity (megabytes) | 60 | 10 |
| Physical volume (cubic inches) | 566 | 150 |
| Weight (pounds) | 21 | 6 |
| Access time (milliseconds) | 30 | 160 |
| Cost per megabyte | $50 | $200 |
| Unit cost | $3,000 | $2,000 |

The table has been copied from the book The Innovator’s Dilemma by Clayton M. Christensen.
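The cost-per-megabyte row follows directly from the unit cost and capacity rows. As a quick sanity check of the table (a minimal sketch, with the figures hard-coded from Christensen’s data):

```python
# Derive cost per megabyte from the unit cost and capacity columns
# of Christensen's 1981 drive comparison table.
drives = {
    "8-inch":    {"capacity_mb": 60, "unit_cost_usd": 3000},
    "5.25-inch": {"capacity_mb": 10, "unit_cost_usd": 2000},
}

for name, d in drives.items():
    cost_per_mb = d["unit_cost_usd"] / d["capacity_mb"]
    print(f"{name}: ${cost_per_mb:.0f} per megabyte")
# 8-inch: $50 per megabyte
# 5.25-inch: $200 per megabyte
```

Note the asymmetry this exposes: the 5.25-inch drive was cheaper per unit but four times more expensive per megabyte – exactly the kind of trade-off that made it look like a bad bet to incumbents.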

And so, 8-inch drive companies continued to focus on 8-inch drives, while a few renegade engineers opened new companies and worked hard on developing better 5.25-inch drives. In a few years, the 5.25-inch drives were just as efficient as the 8-inch drives, and a new market formed: that of the personal desktop computer. Suddenly, every computer maker in the market needed 5.25-inch drives.

One of the first minicomputers. On display at the Vienna Technical Museum. Image found on Wikipedia.

Now, the 8-inch drive company managers were far from stupid or ignorant. When they saw that there was a market for 5.25-inch drives, they decided to leap on the opportunity as well, and develop their own 5.25-inch drives. Sadly, they were too late. They discovered that it takes time and effort to become acquainted with the demands of the new market, to adapt their manufacturing machinery and to change the entire company’s workflow in order to produce and supply computer makers with 5.25-inch drives. They joined the competition far too late, and even though they had been the leviathans of the industry just ten years earlier, they soon sank to the bottom and were driven out of business.

What happened to the engineers who drove forward the 5.25-inch drive revolution, you may ask? They became CEOs of the new 5.25-inch drive manufacturing companies. A few years later, when their own young engineers came to them and suggested that they invest in developing the new and faulty 3.5-inch drives, they decided that there was no market for this invention, no demand for it, and that it was too inefficient anyway.

Care to guess what happened next? Ten years later, the 3.5-inch drives took over, portable computers utilizing them were everywhere, and the 5.25-inch drive companies crumbled away.

That is the essence of disruption: decisions that make sense in the present turn out to be clearly incorrect in the long term, once markets change. Companies that relax and invest only in sustaining innovations, instead of trying to radically change their products and reshape the markets themselves, are doomed to fail. In Peter Diamandis’ words –

“If you aren’t disrupting yourself, someone else is.”

Now that you understand the basics of the Theory of Disruption, let’s see how it applies to Magic.


Magic and Disruption

Wizards of the Coast has been making almost exclusively sustaining improvements over the last twenty years: its talented R&D team focused on releasing new expansions with new cards and new playing mechanics. WotC did try to disrupt itself once by creating the Magic Online platform, but failed to support and nurture this disruptive innovation. The online platform remained an outdated relic – one that made money, to be sure, but was slowly becoming irrelevant in the online world of collectible card games.

In the last five years, many other collectible card games reared their heads online, including minor successes like Shadow Era (200,000 players, ~$156,000 annual revenue) and Urban Rivals (estimated ~$140,000 annual revenue). Each of the above made discoveries in the online world: they realized that players need to be offered cards for free, that they need to be lured to play every day, and that the free-to-play model can still prove profitable since the company’s costs are close to zero: the firm doesn’t need to physically print new cards or to distribute them to retailers. But these upstarts were still so small that WotC could afford to effectively ignore them. They didn’t pose a real threat to Magic.

Then Hearthstone burst into existence in 2014, and everything changed.


Hearthstone’s developers took the best traits of Magic and combined them with all the insights the online gaming industry has developed over recent years. They made the game essentially free to play to attract a large number of players, understanding that their revenues would come from the small fraction of players who spend some money on the game. They minimized time waste by setting a time limit on every player’s turn, and by establishing a rule that players can only act during their own turn (so there’s no need to wait for the other player’s response after every move). They even broke down the Magic draft tournaments of eight people, so that every player who drafted a deck can now play against any other player who drafted a deck, at any time. There’s no time waste in Hearthstone – just games to play and fun to be had.

WotC was still deep asleep at that time. In July 2014, Magic brand manager Liz Lamb-Ferro told GamesBeat that –

“If you’re looking for just that immediate face-to-face, back-and-forth action-based game with not a lot of depth to it, then you can find that. … But if you want the extras … then you’re eventually going to find your way to Magic.”

Lamb-Ferro was right – Hearthstone IS a simpler game – but that simplicity streamlines gameplay, and thus makes the game more rapid and enjoyable for many players. And even if we were to accept that Hearthstone does not attract veteran players who “want the extras” (actually, it does), WotC should have realized that other online collectible card games would soon combine Magic’s sophistication with Hearthstone’s mechanisms for streamlining gameplay. And indeed, in 2014 a new game – SolForge – took all of the strengths of Hearthstone while adding a mechanic of card transformation (each card transforms into three different versions of itself) that is only possible in card games played online. SolForge doesn’t even have a physical version – it never could – and the game is already costing Magic a few more veteran players.

This is the point when WotC began realizing that they were falling far behind the curve. And so, in the middle of 2015 they released Duels of the Planeswalkers 2016. I won’t even bother detailing all the infuriating problems with the game. Suffice it to say that it garnered more negative reviews than positive ones, and made clear that WotC were still lagging far behind their competitors in their understanding of the virtual world, user experience, and what players actually want. In short, WotC found themselves in the position of the 8-inch drive manufacturers, suddenly realizing that the market had changed under their noses in less than two years.


What Could WotC do?

The sad truth is that WotC can probably do nothing right now to fix Magic. The firm can continue churning out sustaining improvements – new expansions and new exciting cards – but it will find itself hard pressed to take over the digital landscape. Magic is a game that was designed for the physical world, and not for the current frenzied pace of the virtual collectible card games. Magic simply isn’t suitable for the new market, unless WotC changes the rules so much that it’s no longer the same game.

Could WotC change the rules in such a dramatic fashion? Yes, but at a great cost. The company could recreate the game online with new cards and rules, but it would have to invest time and effort in relearning the workings of the virtual world and creating a new platform for the revised Magic. Unfortunately, it’s not clear that WotC will have time to do that with Hearthstone, SolForge and a horde of other card games snarling at its heels. The future of Magic online does not look bright, to say the least.

Does that mean Magic the Gathering will vanish completely? Probably not. The Magic brand is still strong everywhere except for the virtual world, which means that in the next five years the game will remain in existence mostly in the physical world, where it will bring much joy to children in school breaks, and much money to the pockets of WotC. During these five years, WotC will have the opportunity to rethink and recreate the game for the next big market: virtual and augmented reality. If the firm succeeds in that front, it’s possible that Magic will be reinvented for the new-new market. If it fails and elects to keep the game anchored only in the physical world, then Magic will slowly but surely vanish away as the market changes and new and exciting games take over the attention span of the next generation.

That’s what happens when you disregard the Theory of Disruption.


Nano-Technology and Magical Cups

When I first read about the invention of the Right Cup, it seemed to me like magic. You fill the cup with water, raise it to your mouth to take a sip – and immediately discover that the water has turned into orange juice. At least, that’s what your senses tell you, and Isaac Lavi, the Right Cup’s inventor, seems to be a master at fooling the senses.

Lavi got the idea for the Right Cup some years ago, when he was diagnosed with diabetes at the age of 30. His new condition meant that he had to let go of all sugary beverages, and was forced to drink only plain water. As an expert in the field of scent marketing, however, Lavi thought up a new solution to the problem: adding scent molecules to the cup itself, which would trick your nose and brain into thinking that you’re actually drinking fruit-flavored water instead of plain water. This new invention can now be purchased on Indiegogo, and hopefully it even works.


The Right Cup – fooling you into thinking that plain water tastes like fruit.


“My two diabetic parents have been drinking from this cup for the last year and a half,” Lavi told me in an e-meeting we had last week, “and I saw that in taste testing in preschool, kids drank from these cups and then asked for more ‘orange juice’. And I told myself – wow, it works!”

What does the Right Cup mean for the future?

A Future of Nano-technology

First and foremost, the Right Cup is one result of all the massive investments in nano-technology research made in the last fifteen years.

“Between 2001 and 2013, the U.S. federal government funneled nearly $18 billion into nanotechnology research… [and] The Obama administration requested an additional $1.7 billion for 2014,” writes Martin Ford in his 2015 book Rise of the Robots. These billions of dollars produced, among other results, new understandings about the release of micro- and nano-particles from polymers, and the ways in which molecules in general react with the receptors in our noses. In short, they enabled the creation of the Right Cup.

There’s a good lesson to be learned here. When our leaders justified their investments in nano-technology, they talked to us about the eradication of cancer via drug delivery mechanisms, or about bridges held up by cobwebs of carbon nanotubes. Some of these ideas will be fulfilled, for sure, but before that happens we might all find ourselves enjoying the more mundane benefit of drinking illusory orange-flavored water. We can never tell exactly where the future will lead us: we can invest in the technology, but eventually innovators and entrepreneurs will take those innovations and put them to unexpected uses.

All the same, if I had to guess I would imagine many other uses for similar ‘Right Cups’. Kids in Africa could use cups or even straws which deliver tastes, smells and even more importantly – therapeutics – directly to their lungs. Consider, for example, a ‘vaccination cup’ that delivers certain antigens to the lungs and thereby creates an immune reaction that could last for years. This idea brings back to mind the Lucky Iron Fish we discussed in a previous post, and shows how small inventions like this one can make a big difference in people’s lives and health.


A Future of Self-Reliance

It is already clear that we are rushing headlong into a future of rapid manufacturing, in which people can enjoy services and production processes in their households that were reserved for large factories and offices in the past. We can all make copies of documents today with our printer/scanner instead of going to the store, and can print pictures instead of waiting for them to be developed at a specialized venue. In short, technology is helping us be more geographically self-reliant – we don’t have to travel anymore to enjoy many services, as long as we are connected to the digital world through the internet. The internet provides information, and end-user devices produce the physical result. This trend will only progress further as 3D printers become more widespread in households.

The Right Cup is another example of a future of self-reliance. Instead of going to the supermarket and purchasing orange juice, you can buy the cup just once, and it will provide you with flavored water for the next 6-9 months. But why stop here?

Take the Right Cup a few years from now and connect it to the internet, and you have the next big product: a programmable cup. This cup will have a cartridge of dozens of scent molecules, each of which can be released at different paces, and in combination with the other scents. You don’t like orange-flavored water? No problem. Just connect the cup to the World Wide Web and download a new set of instructions that will cause the cup to release a different combination of scents, so that your water now tastes like cinnamon-flavored apple cider – or any other combination of tastes you can think of, including some that don’t exist today.


A Future of Disruption?

As with any innovation proposed on crowdfunding platforms, it’s difficult to know whether the Right Cup will live up to its hype. As of now the project has raised more than $100,000 – more than 200% of its funding goal. Should the Right Cup prove itself taste-wise, it could become an alternative to many light beverages – particularly if it’s cheap and long-lasting enough.

Personally, I don’t see Coca-Cola, Pepsi or orchard owners going into panic anytime soon, and neither does Lavi, who believes that the beverage industry is “much too large and has too many advertising resources for us to compete with them in the initial stages.” All the same, if the stars align just right, our children may opt to drink from their Right Cups instead of buying a bottle of orange juice at the cafeteria. Then we’ll see some panicked executives scrambling around at those beverage giants.



It’s still too early to divine the full impact the Right Cup could have on our lives, or even whether the product works as well as promised. For now, we would do well to focus on the previously identified mega-trends which the product fulfills: the idea of using nano-technology to remake everyday products and imbue them with added properties, and the principle of self-reliance. In the next decade we will see more and more products based on these principles. I daresay that our children are going to be living in a pretty exciting world.


Disclaimer: I received no monetary or product compensation for writing this post.


Failures in Foresight, Part II: The Failure of the Paradigm

I often imagine myself meeting James Clerk Maxwell, one of the greatest physicists in history, and the one indirectly responsible for almost all the machinery we use today – from radios to television sets and even power plants. He was recognized as a genius in his own time, and became a professor at the age of 25. His research resulted in Maxwell’s Equations, which describe the connection between electric and magnetic fields. Every electronic device in existence today, and practically all the power stations transmitting electricity to billions of souls worldwide – they all owe their existence to Maxwell’s genius.

And yet when I approach that towering intellectual of the 19th century in my imagination, and try to tell him about all that has transpired in the 20th century, I find that he does not believe me. That is quite unseemly of him, seeing as he is a figment of my imagination, but when I devote some more thought to the issue, I realize that he has no reason to accept any word that I say. Why should he?

At first I decide to go cautiously with the old boy, and tell him about X-rays – whose discovery was made in 1895, just 16 years after Maxwell’s death. “Are you talking of light that can go through the human body and chart all the bones in the way?” he asks me incredulously. “That’s impossible!”

And indeed, there is no scientific school in 1879 – Maxwell’s death date – that can support the idea of X-rays.

I decide to jump ahead and skip the theory of relativity, and instead tell him about the atom bomb that demolished Nagasaki and Hiroshima. “Are you trying to tell me that just by banging together two pieces of that chemical which you call Uranium 235, I can release enough energy to level an entire town?” he scoffs. “How gullible do you think I am?”

And once again, I find that I cannot fault him for disbelieving my claims. According to all the scientific knowledge from the 19th century, energy cannot come from nowhere. Maxwell, for all his genius, does not believe me, and could not have forecast these advancements when he was alive. Indeed, no logical forecasters from the 19th century would have made these predictions about the future, since they suffered from the Failure of the Paradigm.

Scientific Paradigms

A paradigm, according to Wikipedia, is “a distinct set of concepts or thought patterns”. In this definition one could include theories and even research methods. More to the point, a paradigm describes what can and cannot happen. It sets the boundaries of belief for us, and any forecast that falls outside of these boundaries requires the forecaster to come up with extremely strong evidence to justify it.

Up until modern times and the advent of science, paradigms changed at a snail’s pace. People in medieval times largely figured that their children would live and die the same way they themselves did, as would their grandchildren and great-grandchildren, up to the day of rapture. But then science came, with thousands of scientists researching the movement of the planets, the workings of the human body – and the connections between the two. And as they uncovered the mysteries of the universe and the laws that govern our bodies, our planet and our minds, paradigms began to change, and the impossible became possible and plausible.

The discovery of the X-rays is just one example of an unexpected shift in paradigms. Other such shifts include –

Using nuclear energy in reactors and in bombs

Lord Rutherford – the “father of nuclear physics” at the beginning of the 20th century – often denigrated the idea that mankind would ever utilize the energy locked in matter; and yet, one year after his death, the fission of the uranium nucleus was discovered.


Electric motors and power plants

According to legend, the great experimental physicist Michael Faraday was paid a visit by governmental representatives back in the 19th century. Faraday showed the delegation his clunky, primitive electric motors – the first of their kind. The representatives were far from impressed, and one of them asked “what could possibly be the use for such toys?” Faraday’s answer (which is probably more urban myth than fact) was simple – “what use is a newborn baby?”

Today, our entire economy and lives are based on electronics and on the power obtained from electric power plants – all of them based on Faraday’s innovations, and completely unexpected in his time.

Induced Pluripotent Stem Cells

This paradigm shift happened just nine years ago. It was believed that biological cells, once they mature, can never ‘go back’ and become young again. Shinya Yamanaka and other researchers turned that belief on its head in 2006, by genetically engineering mature human cells back into a youthful state, turning them into stem cells. That discovery earned Yamanaka the 2012 Nobel Prize.

Plugs everywhere. You can blame Maxwell and Faraday for this one.

How Paradigms Advance

It is most illuminating to see how computers advanced throughout the 20th century, constantly shifting from one paradigm to another over the years. From 1900 to the 1930s, computers were electromechanical in nature: slow and cumbersome constructs built of electric switches. As technology progressed and new scientific discoveries were made, computers moved on to electric relay technology, and then to vacuum tubes.

Computing power increases exponentially as paradigms change. Source: Ray Kurzweil’s The Singularity is Near

One of the first and best-known computers based on vacuum tube technology was the ENIAC (Electronic Numerical Integrator and Computer), which weighed 30 tons and used 200 kilowatts of electricity. It could perform 5,000 calculations per second – a task which every smartphone today exceeds without breaking a sweat, since smartphones are based on the newer paradigms of transistors and integrated circuits.
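To get a feel for the size of the gap between these paradigms, here is a rough back-of-the-envelope comparison. Only the ENIAC figure comes from the text; the smartphone figure is purely an illustrative assumption (on the order of 10^10 simple operations per second for a modern mobile processor), not a measured value.

```python
import math

# Figure cited in the text for the ENIAC (vacuum tube paradigm).
ENIAC_OPS_PER_SEC = 5_000

# Rough assumption, NOT from the text: a modern smartphone performs
# on the order of 10^10 simple operations per second.
SMARTPHONE_OPS_PER_SEC = 1e10

ratio = SMARTPHONE_OPS_PER_SEC / ENIAC_OPS_PER_SEC
print(f"A smartphone is roughly 10^{round(math.log10(ratio))} times faster than the ENIAC")
```

Even under this crude assumption, the gap spans about six orders of magnitude – far more than any single paradigm (relays, tubes, or transistors alone) ever delivered.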

At each point in time, if you were to ask most computer scientists whether computers could progress much beyond their current state of the art, the answer would’ve been negative. If the scientists and engineers working on the ENIAC were told about a smartphone, they would’ve been completely baffled. “How can you put so many vacuum tubes into one device?” they would’ve asked. “and where’s the energy to operate them all going to come from? This ‘smartphone’ idea is utter nonsense!”

And indeed, one cannot build a smartphone with vacuum tubes. The entire computing paradigm needed to change in order for this new technology to appear on the world’s stage.

The Implications

What does the Failure of the Paradigm mean? Essentially, that we cannot reliably forecast a future distant enough for a paradigm shift to occur. Once the paradigm changes, all previous limitations and boundaries dissolve, and what happens next is up for grabs.

This insight may sound gloomy, since it makes clear that reliable forecasts are impossible to make a decade or two into the future. And yet, now that we understand our limitations we can consider ways to circumvent them. The solutions I’ll propose for the Failure of the Paradigm are not as comforting as the mythical idea that we can know the future, but if you want to be better prepared for the next paradigm, you should consider employing them.

And now – for the solutions!

Solutions for the Failure of the Paradigm

First Solution: Invent the New Paradigm Yourself

The first solution is quite simple: invent the new paradigm yourself, and thus be the one standing on top when the new paradigm takes hold. The only problem is, nobody is quite certain what the next paradigm is going to be. This is the reason we see the industry giants of today – Google, Facebook, and others – buying companies left and right. They’re purchasing drone companies, robotics companies, A.I. companies, and any other venture that looks as if it has a chance to grow into a new and successful paradigm a decade from now. They’re spreading and diversifying their investments, since if even one of these investments leads to the new paradigm, they will be the Big Winners.

Of course, this solution can only work for you if you’re an industry giant, with enough money to spare on many futile directions. If you’re a smaller company, you might consider the second solution instead.

Second Solution: Utilize New Paradigms Quickly

The famous entrepreneur Peter Diamandis often encourages executives to invite small teams of millennials into their factories and companies, and to ask them to actively come up with ideas to disrupt the company’s current workings. The millennials – people between 20 and 30 years old – are less bound by old paradigms than the people currently working in most companies. Instead, they are living the new paradigms of social media, internet everywhere, constant surveillance and loss of privacy. They can utilize and deploy the new paradigms rapidly, in a way that makes the old paradigms seem antique and useless.

This solution, then, helps executives circumvent the Failure of the Paradigm by adapting to new paradigms as quickly as possible.

Third Solution: Forecast Often, and Read Widely

One of the rules for effective Forecasting, as noted futurist Paul Saffo wrote in Harvard Business Review in 2007, is to forecast often. The proficient forecaster needs to be constantly on the alert for new discoveries and breakthroughs in science and technology – and be prepared to suggest new forecasts accordingly.

The reason behind this rule is that new paradigms rarely (if ever) appear out of the blue. There are always telltale signs, called Weak Signals in foresight slang. Such weak signals can be uncovered by searching for new patents, reading Scientific American, Science and Nature to find out about new discoveries, and generally browsing through the New York Times every morning. By doing so, one can develop a better hunch about the onset of a new paradigm.

Fourth Solution: Read Science Fiction

You knew that one was coming, didn’t you? And for a good reason, too. Many science fiction novels are based on some kind of paradigm shift occurring that forces the world to adapt. Sometimes it’s the creation of the World Wide Web (which William Gibson speculated about in his science fiction works), or rockets being sent to the moon (as in Jules Verne’s From the Earth to the Moon), or even dealing with cloning, genetic engineering and the resurrection of extinct species, as in Michael Crichton’s Jurassic Park.

Science fiction writers consider possible paradigm shifts and analyze their consequences and implications for the world. Gibson and other science fiction writers understood that if the World Wide Web were created, we would have to deal with cyber-hackers, with cloud computing, and with the mass democratization of information. In short, they forecast the implications of the new paradigm shift.

Science fiction does not provide us with a solid forecast for the future, then, but it helps us open our minds and escape the Failure of the Paradigm by considering many potential new paradigms at the same time. While there is no research to support this claim, I truly believe that avid science fiction readers are better prepared for new paradigms than everyone else, as they’ve already lived those new paradigms in their minds.

Fifth Solution: Become a Believer

When trying to look far into the future, don’t focus on the obstacles of the present paradigm. Rather, if you see that similar obstacles have constantly been overcome in the past (as happened with computers), there is good reason to assume that the current obstacles will be defeated as well, and that a new paradigm will shine through. Therefore, you have to believe that mankind will keep on finding solutions and developing new paradigms. The forecaster is forced, in short, to become a believer.

Obviously, this is one of the toughest solutions for us to implement as rational human beings. It also requires us to look carefully at each technological field in order to understand the nature of the obstacles, and how long it will take (judging by past trends) to come up with a new paradigm to overcome them. Once the forecaster identifies these parameters, he can be more secure in his belief that new paradigms will be discovered and established.

Sixth Solution: Beware of Experts

This is more of an admonishment than an actual solution, but it is true all the same. Beware of experts! Experts are people whose knowledge was developed during the previous paradigm, or at best during the current one. They often have a hard time translating their knowledge into useful insights about the next paradigm. While they can highlight all the difficulties existing in the current paradigm, it is up to you to consider how in touch those experts are with the next potential paradigms, and whether or not to listen to their advice. That’s what Arthur C. Clarke’s first law is all about –

“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”


The Failure of the Paradigm is a daunting one, since it means we can never forecast the future as reliably as we would like to. Nonetheless, business people today can employ the above solutions to be better prepared for the next paradigm, whatever it turns out to be.

Of all the proposed solutions to the Failure of the Paradigm, I like the fourth one the best: read science fiction. It’s a cheap solution that also brings much enjoyment to one’s life. In fact, when I consult for industrial firms, I often hire science fiction writers to write stories about the possible future of the company in light of a few potential paradigms. The resulting stories are read avidly by many of the employees in the company, and in many cases show the executives just how unprepared they are for these new paradigms.

Fiverr is Broken by Design – and it Hurts Everyone

A few days ago I decided that I wanted a new business card for the upcoming new year. I headed straight to Fiverr, and browsed through some of the graphic designers who offered their services for five dollars or more. After a few minutes, my choice was made: I decided to use the designer with more than a hundred 5-star ratings and literally no negative reviews at all.

Of course, the gig didn’t really cost five dollars. I added $10 to receive the source file as well, $5 for the design of a double-sided business card, and $5 for a “more professional work”, as the designer put it. Along with other bits, the gig cost $30 altogether, which is still a good price to pay for a well-designed card.

Then the troubles began.

I received the design in 24 hours. It was, simply put, nowhere near what I expected. The fonts were all wrong, the colors were messed up, and worst of all – the key graphical element on the front of the card was not centered properly, which indicated a lack of attention to detail that was outright unprofessional. So I asked for a modification, which was implemented within a day. It was not much better than the original. At that point I thanked the designer, and concluded the gig with a review of her work. I gave her an overall rating of three stars – possibly more than I felt her skills warranted – and wrote a review applauding her effort to fix things, but also mentioning that I was not satisfied with the final result.

An hour later, the designer sent me a special plea. She asked me, practically in virtual tears, to remove my review, telling me that we could cancel the order and go our separate ways. She told me that her livelihood depends on Fiverr, and that without high ratings, she would not be approached by other buyers in the future.

A discussion I had with a Fiverr service provider, who begged me to give her a higher rating

I knew that my money would not actually be returned to me, since Fiverr only deposits the refund in your Fiverr account, to be used for the next gigs you purchase from them. But seeing a maiden so distraught, and having an admittedly soft heart, I decided to play the gallant knight and deleted my negative review.

And so, I betrayed the community, and added to the myth of Fiverr.

Lessons for the No-Managers Workplace

In December 2011, the management guru Gary Hamel published an intriguing piece in the Harvard Business Review called “First, Let’s Fire All the Managers”. In the article, Hamel described a wildly successful company – The Morning Star Company – based on a model that makes managers unnecessary. The workers regulate themselves, criticize each other’s work, and deliberate together on the course of action their department should take. Simply put, everyone is a manager in Morning Star, and no one is.

You should read the article if this interests you (and it should), but to sum up – Morning Star has some 400 workers, so it’s not a small start-up, and the model it’s using could definitely be scaled up for much larger companies. However, Hamel included a few admonishments, the first of which was the need for accountability: the employees in Morning Star must “deliver a strong message to colleagues who don’t meet expectations,” wrote Hamel. Otherwise, “self-management can become a conspiracy of mediocrity.”

The Morning Star company – a workplace without managers.
Source: The Los Banos Tomato Festival

The employees in Morning Star receive special training to make sure they understand how important it is that they provide criticism and feedback to other employees, and that they actually hurt all the other employees if such feedback is not provided and made public. Apparently the training works, since Morning Star has been steadily growing over the past few decades, while leaving its competitors far behind. In fact, today “Morning Star is the world’s largest tomato processor, handling between 25% and 30% of the tomatoes processed each year in the United States.”

Morning Star is a shining example of a no-managers workplace that actually works in a competitive market, since each person in the firm makes sure that the others are doing their jobs properly.

But what happens in Fiverr?

Is Fiverr Broken?

I have no idea how many service providers on Fiverr beg their customers for high ratings. I have a feeling that it happens much more frequently than it should, and that soft-hearted customers like me (and probably you too) can become at least somewhat swayed by such passionate requests. The result is that some service providers on Fiverr will enjoy a much higher rating than they deserve – which will in effect deceive all their future potential customers.

Fiverr could easily take care of this issue by banning such requests for high ratings, and by deploying an algorithm that screens messages between clients and service providers to identify them. But why should Fiverr do that? Fiverr profits from having the seemingly best designers on the web, with an average rating of five stars! Moreover, even when a customer is extremely ticked off, all that happens is that the service provider doesn’t get paid. Fiverr keeps the actual money, and only provides recompense in virtual currency that stays in the Fiverr system. This is a system, in short, in which nobody is happy except Fiverr: the customer loses money and time, and the service provider occasionally loses money and gets no incentive or real feedback that would help him or her improve in the long run.
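To make this concrete, here is a minimal sketch of the kind of screening Fiverr could run on buyer-seller messages. The phrases, patterns and function name are my own illustration – this is not Fiverr’s actual system, and a real deployment would need a trained classifier plus human moderation rather than a handful of regexes:

```python
import re

# Illustrative patterns that often signal a plea to alter a rating.
# These are assumptions for the sketch, not a vetted phrase list.
SOLICITATION_PATTERNS = [
    r"\b(remove|delete|change)\b.{0,40}\b(review|rating|feedback)\b",
    r"\b(5|five)[- ]star\b",
    r"\bcancel the order\b.{0,40}\b(review|rating)\b",
]

def flags_rating_solicitation(message: str) -> bool:
    """Return True if a buyer-seller message looks like a request to
    alter a review, so it can be routed to human moderators."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in SOLICITATION_PATTERNS)

print(flags_rating_solicitation(
    "Please, could you remove your review? We can cancel the order."))  # True
print(flags_rating_solicitation(
    "Here is the final design file, thanks for your business!"))  # False
```

A flagged message would not automatically punish the seller; it would simply surface the conversation for review – the cheap intervention which, as argued above, Fiverr currently has no incentive to build.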


As I wrote earlier, Fiverr could easily handle this issue. Since they do not, I rather suspect they like the way things work right now. However, I believe that sooner or later they will find out that they have garnered themselves a bad reputation, which will keep future customers away from their site. We know that great start-ups that have received a large amount of funding and hype, like Quirky, have toppled before because of inherent problems in their structures. I hope Fiverr would not fail in a similar fashion, simply because it doesn’t bother to winnow the bad apples from its orchard.

Pace Layer Thinking and the Lucky Iron Fish

When Achariya, an ordinary woman from Cambodia, got pregnant, she was scared out of her wits. Pregnancy can become a death sentence for women in developing countries, with more than half a million mothers dying every year during pregnancy or childbirth. In Cambodia specifically, “maternity-related complications are one of the leading causes of death among women ages 15 to 49”, according to the Population Reference Bureau. Out of every 100,000 Cambodian women delivering a baby, 265 do not make it out of the birth room alive. In comparison, in developed countries like Italy, Australia and Israel, only 4–6 mothers out of 100,000 perish during childbirth.

While there are many different reasons for this high maternal mortality, a prominent one is chronic anemia caused by dietary iron deficiency. Iron deficiency affects about 60% of pregnant Cambodian women, and results in premature labor and hemorrhages during childbirth.

There is good evidence that iron can leach out of cast-iron cookware, but such cookware can be too expensive for the average Cambodian family. In 2008, however, Christopher Charles, a student from the University of Guelph, had a great idea: he and his team distributed iron discs to women in a Cambodian village, asking them to add a disc to the pot when making soup or boiling water for rice. In theory, the iron was supposed to leach from the ingot into the food. In practice, the women took the iron ingots and immediately used them as doorstops, which did not prove as beneficial to their health.

Charles did not let that failure deter him. He realized he needed to find a way to make the women actually use the iron ingot, and after a conversation with the village elders a solution was found. He recast the iron in the form of a smiling fish – a good luck charm in Cambodian culture. The newly shaped fish enjoyed newfound success: women in the village began putting it in their dishes, and the anemia rate in the village decreased by 43% within 12 months. Today, Charles and his company are scaling up operations, and in 2014 alone supplied more than 11,000 iron fish to families in Cambodia.

The Lucky Iron Fish in a gift package.
Source: Wikipedia, by Dflock

Pace Layer Thinking

For me, the main lesson from the iron fish experiment is that new technology cannot be measured and analyzed without considering how society and current culture will accept it. While this principle sounds obvious, many entrepreneurs overlook it, and find themselves struggling against societal forces beyond their control, instead of adapting their inventions so that society accepts them easily.

We have here, in essence, a very clear demonstration of the Pace Layering model, developed and published by Stewart Brand back in 1999. Brand distinguishes between six layers that describe society, each of which develops and changes at a pace of its own. Those layers are, in order from the ones that change most rapidly to the ones that are nearly immovable:

  • Fashion
  • Commerce
  • Infrastructure
  • Governance
  • Culture
  • Nature
Pace Layer Thinking model.
Source: The Clock of the Long Now

The upper layers move forward more rapidly than the lower ones. They are the Uber and Airbnb (commerce layer) that stand in conflict with government regulations (governance layer). They are the ear extenders (fashion layer) that stand in conflict with the unwritten Western prohibition against significantly altering one’s body (culture layer). And sometimes they are even revolutionary governmental models used to control the population, as with the communist regime in the USSR, which conflicted with the very biological nature of the human beings put in control of the country (governance layer vs. nature layer).

As you can see in the following slide (originally from Brand’s lecture at The Interval), the upper layers are not only the faster ones, but they are discontinuous – meaning that they evolve rapidly and jump forward all the time. Unsurprisingly, these layers are where innovations and revolutions occur, and as a result – they get all the attention.

The lower layers are the continuous ones. Consider culture, for example. It is impressively (and frustratingly) difficult to bring changes into a cultural item like religion. It takes decades – and sometimes thousands of years – to make lasting changes in religion. Once such changes occur, however, they can remain present for similar vast periods of time. And some would say that religion and Culture are blindingly fast when compared to the Nature layer, which is almost impossible to change in the lifetime of the individual.

You can easily argue that the Pace Layer Model is flawed, or missing some parts. Evolutionary psychologists, for example, believe that our psychology is a result of our genetics – and thus would probably put some aspects of Culture, Commerce, Governance and even Fashion at the Nature level. Synthetic biologists would say that today we can play with Nature as we wish, and that as a result the Nature level should be bumped up to a higher level. It could even be said that companies like Uber (Commerce level) are turning out to have more power than governments (Governance level). Regardless, the model provides us with a good starting point when we try to think about the present and the future.

What does the Pace Layer Model have to do with the smiling lucky fish? Everything and nothing. While I don’t know whether Charles knew of the model, a similar solution could have been reached by considering the problem in a Pace Layer style of thinking. Charles’ problem, in essence, revolved around creating a new Fashion. He had a hard time doing that without resorting to a lower layer – the Culture layer – and reshaping his idea in ways that fit the existing culture.

Pace Thinking about the Israel-Palestine Conflict

We can use Pace Layer thinking to consider other problems and challenges of modern times. It’s particularly interesting for me to analyze the ongoing Israel-Palestine conflict from a layer-based point of view.

There is currently a wave of terrorist attacks in Israel, carried out by both Palestinians and Israeli-Arabs from East Jerusalem. I would place this present outbreak at the Fashion level: it’s happening rapidly, it’s contagious (more attacks are attempted every day), and it draws all of our attention. In short, it’s a crisis we should set aside when trying to get a better long-term view of the overall problem.

What are the other layers we could work with in regard to the conflict? There is the Commerce layer, representing the trade between Israel and the Palestinian Authority. If we want to lessen the frequency of crises like the current one, we should probably find ways to increase trade between the two parties. We could also consider the Infrastructure and Governance layers, thinking about shared cities, buildings or other infrastructure.

Last but not least – and probably most importantly – we need to consider the Culture layer. There is no denying that some aspects of the conflict revolve around the religions and other cultural habits of each side. When a young Israeli-Arab gets out of bed in the morning, feels repressed, and decides to murder a Jewish citizen, we need to ask ourselves why the culture around him hasn’t encouraged him to turn to other means of expressing his anger, like writing a column in the paper or getting into politics. So the culture must change – and we need to find ways to bring about such a change.

Obviously, these preliminary ideas and thoughts are merely starting points for a deeper analysis of the problem, but they serve to highlight the fact that every problem and conflict can be analyzed at several different layers, none of which should be ignored – and that the best solutions take several layers into consideration.


The Pace Layer model of thinking can be a powerful tool in the analysis of every challenge, and could be used in many different cases. We’ll probably use it in the future in other articles on this blog, to analyze different situations and crises and examine the deeper layers that exist under the most fashionable and rapid ones.

In the meantime, I dare you to use the Pace Layer model to consider problems of your own – whether they’re of the national kind or entrepreneurial in nature – and report in the comments section what you’ve found out.

Failures in Foresight: The Failure of Nerve

Picture from Wikipedia, uploaded by the user Yerevanci

Today I would like to talk (write?) about the first of several failures in foresight. This first failure – called the Failure of Nerve – was identified in 1962 by noted futurist and science fiction titan Sir Arthur C. Clarke. While Clarke mostly pinpointed this failure in the preface to his book about the future, I’ve identified several forces leading to the Failure of Nerve, and will discuss ways to circumvent it, in the hope that the astute reader will avoid similar failures when thinking about the future.

Failure of Nerve

The Failure of Nerve is one of the most frequent failures in talking or writing about the future, at least in my personal experience. When experts or even laypeople express an opinion about the future, you expect them to be knowledgeable enough to be aware of the facts and data of the present. And yet, all too often, this expectation is smashed on the hard rock of mankind’s arrogance. The Failure of Nerve occurs when people are too fearful to look for answers in the data that surrounds them, and instead focus on repeating their preconceived notions – which might have been true in the past, but are no longer relevant in the present.

Examples of the Failure of Nerve are sadly abundant. Many quote Simon Newcomb, the famous American astronomer, who declared that flying machines were essentially impossible a mere two years before the first flight of the Wright brothers –

“The demonstration that no possible combination of known substances, known forms of machinery and known forms of force, can be united in a practical machine by which man shall fly long distances through the air, seems to the writer as complete as it is possible for the demonstration of any physical fact to be.”

However, this is not a Failure of Nerve, since in Newcomb’s time, the data from the scientific labs themselves was incorrect. As the Wright brothers wrote about their experiments –

“Having set out with absolute faith in the existing scientific data, we were driven to doubt one thing after another, till finally, after two years of experiment, we cast it all aside, and decided to rely entirely upon our own investigations.”

Newcomb’s Failure of Nerve appeared later, when he was confronted with reports of the Wright brothers’ success. Instead of withholding judgement and checking the data again, Newcomb only conceded that flying machines might have a slight chance of existing, but could certainly not carry any human being other than the pilot.

The first flight of the Wright brothers – against the better judgement of the scientific experts of the time.
Source: Wikipedia

A similar Failure of Nerve can be found in the words of Napoleon Bonaparte from the year 1800, uttered in reply to news regarding Robert Fulton’s steamboat –

“What, sir, would you make a ship sail against the wind and currents by lighting a bonfire under her deck? I pray you, excuse me, I have not the time to listen to such nonsense.”

Had the rising emperor bothered to take a better look at the state of steamboats, he would’ve learned that boats with “bonfires under their decks” were already carrying passengers in the United States, even though the venture was not a commercial success. Fulton went on to construct a steamboat (nicknamed “Fulton’s Folly”) that rose to fame, and in 1816 France finally came to its senses and purchased a steamboat from Great Britain. Knowing of Napoleon’s genius in warfare, it is an interesting thought exercise to consider how history might have changed had the emperor realized the potential of steamboats while the technology was still emergent.

Is it possible that steamboats like this one would’ve changed the course of history, had Napoleon not been affected by the Failure of Nerve?
Source: Wikipedia

How do we deal with a Failure of Nerve? To find the answer to that question, we need to understand the forces that make this failure so common.

Behind the Curtains of the Nerve

There are at least three different forces that can contribute to a Failure of Nerve. These are: selective exposure to information, confirmation bias, and last but definitely not least – the conservation of reputation.

The Force of Selective Exposure

Selective exposure to information is something we all suffer from. In this day and age, we have an abundance of information. In the past, news would’ve taken weeks or months to reach us, and we only had the village elder’s opinion to interpret it for us. Today we’re flooded by information from multiple media sources, each with its own not-so-secret agenda. We’re also exposed to columns by social critics and other luminaries, and we can usually tell in advance how they look at things. If you read Tom Friedman’s column, you can be sure he’ll give you the leftist approach. If you turn on the TV to The Glenn Beck Program, on the other hand, you’ll get the right-wing view.

An abundance of information is all well and good, until you realize that human beings today suffer from a scarcity of attention. They can only focus on one article at a time, and as a result must choose how to divide their time between competing pieces of information. The easiest choice? Obviously, to go with the news that supports your current view on life. And that is indeed the choice many people make – which understandably results in a Failure of Nerve. How can you be aware of any new information that contradicts your core beliefs, if you only listen to the people who repeat those same core beliefs?

Philip E. Tetlock, in his new book Superforecasting, tells of Doug Lorch, one of the top forecasters discovered in recent years, who has found a way to circumvent selective exposure, albeit with some effort. In the words of Tetlock (p. 126) –

“Doug knows that when people read for pleasure they naturally gravitate to the like-minded. So he created a database containing hundreds of information sources – from the New York Times to obscure blogs – that are tagged by their ideological orientation, subject matter, and geographical origin, then wrote a program that selects what he should read next using criteria that emphasize diversity. … Doug is not merely open-minded. He is actively open-minded.”
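The mechanism Tetlock describes can be sketched in a few lines of code. The sources, tags and selection rule below are my own illustration of the idea, not Lorch’s actual program – the point is only that the next item to read is chosen to maximize unfamiliarity rather than comfort:

```python
from collections import Counter

# A toy catalog of sources, tagged by orientation, subject and geography,
# loosely mirroring the tagged database Tetlock describes.
SOURCES = [
    {"name": "NYT op-ed",         "tags": {"left", "politics", "us"}},
    {"name": "WSJ editorial",     "tags": {"right", "economics", "us"}},
    {"name": "Al Jazeera report", "tags": {"world", "politics", "mideast"}},
    {"name": "obscure tech blog", "tags": {"tech", "libertarian", "us"}},
    {"name": "Le Monde feature",  "tags": {"left", "world", "europe"}},
]

def pick_next(history):
    """Pick the source whose tags are least represented in what was
    already read -- emphasizing diversity over like-mindedness."""
    seen = Counter(tag for item in history for tag in item["tags"])
    # Familiarity = how often a source's tags were already encountered.
    return min(SOURCES, key=lambda src: sum(seen[t] for t in src["tags"]))

history = []
for _ in range(3):
    history.append(pick_next(history))

print([src["name"] for src in history])
# ['NYT op-ed', 'WSJ editorial', 'Al Jazeera report']
```

Even this crude rule drifts away from any single ideological cluster after a couple of picks; a real system would also have to weigh quality, recency and deduplication.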

Of course, reading opposite views to the one you adhere to can be annoying and vexing, to say the least. And yet, there is no other way to form a more nuanced and solid view of the future.

Superforecasting: The Art and Science of Prediction. By Philip E. Tetlock and Dan Gardner

The Force of Confirmation Bias

Sadly, even when people choose to actively open their minds to different views, it does not mean they will be able to assimilate the lessons into their outlook. As human beings, we are wired to –

“…search for, interpret, prefer, and recall information in a way that confirms one’s beliefs or hypotheses while giving disproportionately less attention to information that contradicts it.” – Wikipedia

The confirmation bias is well known to any expectant parent. You walk around the city, and you find that the street is chock-full of parents with strollers and babies. They are everywhere. You can’t avoid them in the streets or on the bus, and even at work you find that your co-worker has decided to bring her children to the workplace today. So what happened? Has the world’s birth rate suddenly doubled?

The obvious answer is that we are constantly influenced by the confirmation bias. If our mind is constantly thinking about babies, then we’ll pay more attention to any drooling toddler crossing the road, and the memory will be etched much more firmly into our minds.

The confirmation bias does not only influence young parents. It has real importance in the way we view our world. A study from 2009 demonstrated that people spend 36 percent more time, on average, reading articles that they agree with. Another study from the same year showed that when conservatives watch The Colbert Report – in which Stephen Colbert satirizes a right-wing news pundit – they read extra meaning into his words: they claimed that Colbert only pretends to be joking, and actually means what he says on the show.

How does the confirmation bias relate to the Failure of Nerve? In a way, it excuses some of the bad reputation the Failure of Nerve has garnered since Clarke coined it. The confirmation bias basically means that unless we make a truly tremendous and conscious effort to analyze the world around us, our minds will fool us. We’ll pay less attention to evidence that refutes our current outlook, and consider it less important than other pieces of evidence. Or as the pioneer of the scientific method, Francis Bacon, put it (I found this great quote in a highly recommended blog: You Are Not So Smart) –

“The human understanding when it has once adopted an opinion (either as being the received opinion or as being agreeable to itself) draws all things else to support and agree with it.”

Can we fight off the influence of the confirmation bias on our thinking process? We can do so partially, but never completely, and it will never be easy. Warren Buffett (third on Forbes’ list of the world’s richest people, and one of the most successful investors in the world) uses two means to tackle the confirmation bias: he specifically looks for dissenters and invites them to speak up, and (reportedly) he promptly writes down any piece of evidence that contradicts his current ideas. In the words of Buffett himself (quoted in TheDataPoint) –

“Charles Darwin used to say that whenever he ran into something that contradicted a conclusion he cherished, he was obliged to write the new finding down within 30 minutes. Otherwise his mind would work to reject the discordant information, much as the body rejects transplants.”

In short, to minimize the impact of the confirmation bias, you need to remain constantly vigilant against the tendency to be certain of yourself. You must actively seek out those who disagree with you and listen to their opinions, and perhaps most importantly: you should write it all down, in order to distance yourself from your original perspective and allow yourself to judge your thinking as though it were someone else's.

The Conservation of Reputation

One of the best-known laws in the physical world is the Conservation of Mass. Only slightly less well-known is the law of Conservation of Reputation, which states that the average expert always takes the greatest care not to lose face or reputation in his or her dealings with the media. Upton Sinclair summed up this law nicely when he wrote –

“It is difficult to get a man to understand something, when his salary depends on his not understanding it.”

Sadly enough, most experts believe that revising past forecasts, or indeed changing their opinion at all, will tarnish their reputation. And so we meet experts who will deny reality even when they face it head-on. Some of them are probably blinded by their own big ideas and egos. Others probably choose to conserve what's left of their reputation and dignity at any cost, even as they see their forecasts shrivel and wither in the light of the present.

The story of Larry Kudlow is particularly telling in this regard. Kudlow forecast that President George W. Bush's substantial tax cuts would result in an economic boom. The forecast fell flat, and the economy did not perform as well as it had during President Clinton's tenure. Kudlow did not seem to notice, and declared that the "Bush Boom" had already arrived. In fact, in 2008 he proclaimed that the current progress of the American economy "may be the greatest story never told". Five months later, Lehman Brothers filed for bankruptcy, and the entire global financial system collapsed along with that of the U.S.

I am going to assume that Kudlow was truly sincere in his proclamations, but obviously many other experts do not feel the need to be as honest, and will adhere to their past proclamations and declarations come hell or high water. And if we're being totally honest, it must be said that the public encourages such behavior. In January 2009, The Kudlow Report (starring none other than Kudlow himself) began airing on CNBC. Indeed, sticking to your guns even in the face of reality seems to be one of the most important lessons for experts who wish to come out on top in the present – and who assume, correctly, that few if any will ever force them to come to terms with their forecasts from the past.


In this text, the first of several, I've covered the Failure of Nerve in foresight and forecasting. The Failure of Nerve was originally identified by Arthur C. Clarke, but I've tried to use our current understanding of behavioral psychology to add more depth and to identify ways for people to overcome this all-too-common failure. Another book that has been very helpful in this endeavor is the recently published Superforecasting by Philip E. Tetlock and Dan Gardner, which you should definitely read if you're interested in the art and science of forecasting.

There are obviously several other failures in foresight, which I will cover in future articles on the subject.