When I first read about the invention of the Right Cup, it seemed to me like magic. You fill the cup with water, raise it to your mouth to take a sip – and immediately discover that the water has turned into orange juice. At least, that’s what your senses tell you, and Isaac Lavi, the Right Cup’s inventor, seems to be a master at fooling the senses.
Lavi got the idea for the Right Cup some years ago, when he was diagnosed with diabetes at the age of 30. His new condition meant that he had to give up all sugary beverages and drink only plain water. As an expert in the field of scent marketing, however, Lavi came up with a novel solution to the problem: adding scent molecules to the cup itself, which trick your nose and brain into thinking that you’re actually drinking fruit-flavored water instead of plain water. This new invention can now be purchased on Indiegogo, and hopefully it even works.
“My two diabetic parents have been drinking from this cup for the last year and a half,” Lavi told me in an e-meeting we had last week, “and I saw that in taste testing in a preschool, kids drank from these cups and then asked for more ‘orange juice’. And I told myself – wow, it works!”
What does the Right Cup mean for the future?
A Future of Nano-technology
First and foremost, the Right Cup is one result of all the massive investments in nano-technology research made in the last fifteen years.
“Between 2001 and 2013, the U.S. federal government funneled nearly $18 billion into nanotechnology research… [and] The Obama administration requested an additional $1.7 billion for 2014,” writes Martin Ford in his 2015 book Rise of the Robots. These billions of dollars produced, among other results, new understandings about the release of micro- and nano-particles from polymers, and the ways in which molecules in general react with the receptors in our noses. In short, they enabled the creation of the Right Cup.
There’s a good lesson to be learned here. When our leaders justified their investments in nano-technology, they talked to us about the eradication of cancer via drug-delivery mechanisms, or about bridges suspended from cobwebs of carbon nanotubes. Some of these ideas will be fulfilled, for sure, but before that happens we might all find ourselves enjoying the more mundane benefit of drinking illusory orange-flavored water. We can never tell exactly where the future will lead us: we can invest in the technology, but eventually innovators and entrepreneurs will put it to unexpected uses.
All the same, if I had to guess, I would imagine many other uses for similar ‘Right Cups’. Kids in Africa could use cups or even straws which deliver tastes, smells and, even more importantly, therapeutics directly to their lungs. Consider, for example, a ‘vaccination cup’ that delivers certain antigens to the lungs and thereby creates an immune reaction that could last for years. This idea brings to mind the Lucky Iron Fish we discussed in a previous post, and shows how small inventions like this one can make a big difference in people’s lives and health.
A Future of Self-Reliance
It is already clear that we are rushing headlong into a future of rapid manufacturing, in which people can enjoy services and production processes in their households that were reserved for large factories and offices in the past. We can all make copies of documents today with our printer/scanner instead of going to the store, and can print pictures instead of waiting for them to be developed at a specialized venue. In short, technology is helping us be more geographically self-reliant – we don’t have to travel anymore to enjoy many services, as long as we are connected to the digital world through the internet. The internet provides information, and end-user devices produce the physical result. This trend will only progress further as 3D printers become more widespread in households.
The Right Cup is another example of a future of self-reliance. Instead of going to the supermarket and purchasing orange juice, you can buy the cup just once and it will provide you with flavored water for the next six to nine months. But why stop here?
Take the Right Cup a few years ahead and connect it to the internet, and you have the next big product: a programmable cup. This cup will have a cartridge of dozens of scent molecules, each of which can be released at a different pace and in combination with the others. You don’t like orange-flavored water? No problem. Just connect the cup to the World Wide Web and download a new set of instructions that will cause the cup to release a different combination of scents, so that your water now tastes like cinnamon-flavored apple cider, or any other combination of tastes you can think of – including some that don’t exist today.
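To make the idea concrete, here is a toy sketch of how such a programmable cup might represent downloadable flavor profiles. Everything here – the class name, the scent list, the intensity values – is a hypothetical illustration, not a real product API.

```python
class ProgrammableCup:
    """Toy model of a cup with a fixed scent cartridge and downloadable profiles."""

    def __init__(self, cartridge_scents):
        # The cartridge holds the fixed set of scent molecules the cup can release.
        self.cartridge = set(cartridge_scents)
        self.profile = {}  # scent -> release intensity (0.0 to 1.0)

    def load_profile(self, profile):
        """'Download' a flavor profile: a mapping of scent -> intensity."""
        missing = set(profile) - self.cartridge
        if missing:
            raise ValueError(f"cartridge lacks scents: {sorted(missing)}")
        self.profile = dict(profile)

    def describe(self):
        return ", ".join(f"{s} ({i:.0%})" for s, i in sorted(self.profile.items()))

cup = ProgrammableCup(["apple", "cinnamon", "orange", "vanilla"])
cup.load_profile({"apple": 0.7, "cinnamon": 0.3})  # cinnamon-apple cider
print(cup.describe())  # apple (70%), cinnamon (30%)
```

The point of the sketch is the separation between hardware (the cartridge, bought once) and software (the profile, downloaded freely) – which is exactly what would make such a cup a ‘programmable’ product.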
A Future of Disruption?
As with any innovation and product proposed on crowdfunding platforms, it’s difficult to know whether the Right Cup will live up to its hype. As of now the project has received more than $100,000 – more than 200% of its funding goal. Should the Right Cup prove itself taste-wise, it could become an alternative to many soft drinks – particularly if it’s cheap and long-lasting enough.
Personally, I don’t see Coca-Cola, Pepsi and orchard owners going into a panic anytime soon, and neither does Lavi, who believes that the beverage industry is “much too large and has too many advertising resources for us to compete with them in the initial stages.” All the same, if the stars align just right, our children may opt to drink from their Right Cups instead of buying a bottle of orange juice at the cafeteria. Then we’ll see some panicked executives scrambling around at those beverage giants.
It’s still too early to divine the full impact the Right Cup could have on our lives, or even whether the product works as well as promised. For now, we would do well to focus on the previously identified mega-trends which the product embodies: the idea of using nano-technology to remake everyday products and imbue them with added properties, and the principle of self-reliance. In the next decade we will see more and more products based on these principles. I daresay that our children are going to be living in a pretty exciting world.
Disclaimer: I received no monetary or product compensation for writing this post.
I gave a lecture to the Jewish Alliance of Greater Rhode Island, which is a lot like the Justice League, but Jewish. I told them about all the ways in which the world is becoming a better place, and all the reasons to expect these trends to continue into the future. There are plenty of reasons for optimism: more people are literate than ever before; the number of people suffering from extreme poverty is rapidly declining and is about to fall below 10% for the first time in human history; and the exponential progress in solar energy could ensure that decontamination and desalination devices can operate everywhere, overcoming the water crisis that many believe looms ahead.
After the lecture was done I opened the stage for questions. The first one was short and to the point: “What about terrorists?”
Nowadays, following the attacks in Paris, terrorists seem to be on everybody’s mind. However, it must be said that while attacks against civilians are deplorable, terrorists have generally had very little success with them. The September 11 attacks carried the worst death toll of all terrorist attacks in recent history, with just 19 plane hijackers killing 2,977 people. While terrorism may yet progress to chemical and biological warfare, so far it has been relatively harmless when you count only the cost in lives; it mostly affects people’s morale.
I would say the question that’s really bothering people is whether terrorists can eventually deal a debilitating deathblow to Western culture, or at the very least create a disturbance severe enough to make that culture go into rapid decline. And that raises an interesting question: can we find a way to conserve our culture, our values and our monuments for good?
I believe we have already found a way to do that, and Wikipedia is a shining example.
Creative Destruction and Wikipedia
Spot the Dog is a series of children’s books about the adventures of Spot (the dog). On July 3, 2012, the Wikipedia entry for Spot the Dog was changed to claim that the author of the series was, in fact, none other than Ernest Hemingway writing under the pseudonym Eric Hill. In the revised Wikipedia entry, readers learned about “Spot, a young golden retriever who struggles with alcoholism and a shattered sense of masculinity.”
Needless to say, this was a hoax. Spot is obviously a St. Bernard puppy, and not a “young golden retriever”.
What’s interesting is that within ten minutes of the hoax’s perpetration, it was removed and the original article was restored as if nothing had ever happened. That is not surprising to us, since we’ve gotten used to the fact that Wikipedia keeps backups of every article and of every revision ever made to it. If something goes wrong, the editors just pull up the latest version from before the incident.
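The revert mechanism described above can be sketched in a few lines of code. This is only an illustration of the append-only revision idea, not MediaWiki’s actual data model:

```python
class Article:
    """Toy append-only revision history: nothing is ever overwritten."""

    def __init__(self, text):
        self.revisions = [text]  # a full copy of every version ever saved

    @property
    def current(self):
        return self.revisions[-1]

    def edit(self, new_text):
        self.revisions.append(new_text)

    def revert(self, steps=1):
        """Restore an earlier revision by appending it as the newest one."""
        restored = self.revisions[-1 - steps]
        self.revisions.append(restored)

page = Article("Spot is a St. Bernard puppy.")
page.edit("Spot is a young golden retriever who struggles with alcoholism.")  # the hoax
page.revert()  # editors pull up the last good version
print(page.current)  # Spot is a St. Bernard puppy.
```

Because even the revert is recorded as a new revision rather than a deletion, the vandalism itself remains in the history – which is precisely why such a system is so hard to damage permanently.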
A system of this kind can only exist in the virtual world, because of a unique phenomenon: due to the exponential growth in computing capabilities and data storage, bits now cost less than atoms. The cost for keeping a virtual copy of every book ever written is vastly lower than keeping such copies on paper in the ‘real’ world – i.e. our physical reality.
The result is that Wikipedia is invulnerable to destruction and virtual terrorism, as long as there are people who care enough to restore it to its previous state and the data can be distributed easily among people and computers instead of remaining in one centralized data bank. The virtualization and distribution of the data have essentially immortalized it.
Can we immortalize objects in the physical world as well?
Immortalization via Virtualization
On February 26, 2015, Islamic State militants brought sledgehammers into the Mosul Museum and carefully, thoroughly shattered an unknown number of ancient statues and artefacts from the Assyrian era. In effect, the terrorists committed a crime of cultural murder. It is probable that several of the artefacts destroyed in this manner had no virtual representation yet, and are thus gone forever. They are, in a very real sense of the word, dead.
Preventing such a tragedy from ever occurring again is entirely within our capabilities. We simply need to obtain high-resolution scans of every artefact in every museum. Such a venture would certainly come at a steep cost – quite possibly more than a billion dollars – but is that such a high price to pay for immortalizing the past?
These kinds of ventures have already begun sprouting up around the world. The Smithsonian is scanning artefacts and even entire prehistoric caves, and distributing those scans among history enthusiasts around the world. What better way to ensure that these creations will last forever? Similarly, Google is adding hundreds of 3D models of art pieces to its Google Art Project initiative. That’s a very good start to a longer-term process, and if progress continues at this pace, we will probably immortalize most of the world’s artefacts within a decade, with major architectural monuments following soon after. Indeed, one could well say that Google’s Street View project is preserving our cities for eternity.
Architecture and history, then, are rapidly gaining invulnerability. The terrorists of the present have a ‘grace period’ in which to destroy some more pieces of art, but as we go forward into the future, most of that art will be preserved in the virtual world, to be viewed by all – and also to be recreated as needed.
So we’ll save (pun fully intended) our history and culture, but what about ourselves? Can we create virtual manifestations of our human selves in the digital world?
That might actually be possible in the foreseeable future.
Eternime – The Eternal Me
Eternime is just one of several highly ambitious companies and projects trying to create a virtual manifestation of an individual: you, me, or anybody else. The entrepreneurs behind this start-up leapt to fame in 2014 when they announced their plans to create intelligent avatars for every person. By going over the abundance of information we leave on our social networks, and by receiving as input answers to many different questions about an individual’s life, those avatars would be able to answer questions just as if they were that same individual.
Efforts to virtualize the self are also taking place in academia, as demonstrated by a new initiative, New Dimensions in Testimony, launched at the University of Southern California and led by Bill Swartout, David Traum, and Paul Debevec. In the project, interviews with Holocaust survivors are recorded and separated into hundreds of different answers, which an avatar then provides when asked.
I think the creators of both projects would agree that they are still in very early phases, and that nobody will mistake the avatars for accurate recreations of the original individuals they were based on. However, as they say, “it’s a good start”. As data storage, computing capabilities and recording devices continue to improve exponentially, we can expect more and more individuals to be virtualized, so that their memories and even personalities are kept online for a very long time. If we take care to distribute these virtual personalities around the world, they will be practically immune to all but the largest acts of terrorism.
In recent decades we’ve started creating virtual manifestations of information, objects and even human beings, and distributing them throughout the world. Highly distributed virtual elements are exceedingly difficult to destroy or corrupt, and can be conserved for an extremely long time, as long as there’s a community that acknowledges their worth. While the original physical objects are extremely vulnerable to terrorist attacks, their virtual manifestations are largely immune to any wrongdoing.
So what should we do to protect our culture from terrorism? Virtualize it all. 3D-scan every monument and every statue, every delicate porcelain cup and every ancient book in high resolution, and upload it all to the internet, where it can be shared freely among the people of the world. The physical monuments can and will be destroyed at some point in the future. The virtual ones will carry on.
Today is World Kindness Day, and that serves as a wonderful starting point for a discussion of where kindness is heading, and why we’re moving towards a World of Karma: a world filled with infinite kindness – and, in another sense, almost none at all.
In order to understand the future of kindness, we must first look at two parallel trends occurring nowadays, and analyze their combined impact. These trends are growing omni-connectivity and cognitive computing. Let’s go quickly over each, and see how together they culminate in a world of infinite kindness.
Omni-connectivity is my term for a world in which everyone is connected, and everything is known. This world will be brought about by the growing Internet of Things, which connects every ‘thing’: every item, every object. From the floor under your feet that counts how many people have walked over it today, to the cement brick in the nearby bridge that senses when the structure is about to fail. Your mirror is connected to the internet, as are your toothbrush and your comb. And yes, your clothes are all sending data about what’s happening to you every minute and every second of the day.
The Internet of Things is becoming a reality because of incredible leaps forward in technology. The cost of sensors has fallen by 40% over the last ten years, while the costs of bandwidth and processing have fallen roughly 40-fold and 60-fold respectively over the same period. By the year 2020 – just five years ahead – there are expected to be 28 billion ‘things’ connected to the internet, and the number is only expected to grow after that.
In the Internet of Things era, everything that happens in the physical world is recorded, uploaded to the cloud as data and analyzed by computers. Very few people, if any, can remain invisible or under the shroud of anonymity. Everything you do is analyzed, quantified and catalogued in vast databases.
The other piece of the puzzle is the growing capability of cognitive computing. Today, we are beginning to teach computers by training them: we show them images of many cats, for example, so that they can gain an understanding of what a cat is. Many of these computers are modeled on the workings of the human brain, making use of artificial neural networks, so it should come as no surprise that they can be taught basic concepts and imagery, and can even learn to play games just by watching human beings play them.
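To illustrate what ‘training by example’ means at its simplest, here is a toy perceptron – a single artificial neuron – that learns to separate two classes from labeled samples. The ‘cat features’ and every number below are made up for the sake of the example; real image-recognition networks apply the same principle at a vastly larger scale.

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Classic perceptron learning rule: nudge the weights after each mistake."""
    w = [0.0] * len(samples[0])  # one weight per feature
    b = 0.0                      # bias term
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Pretend features: (pointiness of ears, length of whiskers) -> cat or not-cat.
cats     = [(0.9, 0.8), (0.8, 0.9), (0.7, 0.7)]
not_cats = [(0.1, 0.2), (0.2, 0.1), (0.3, 0.2)]
w, b = train_perceptron(cats + not_cats, [1, 1, 1, 0, 0, 0])
print(predict(w, b, (0.85, 0.75)))  # 1: classified as a cat
```

Nobody wrote down a rule for what a cat is; the classifier extracted one from labeled examples – which is the core idea behind the far larger networks described above.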
Computers are already better than human beings at image recognition, and it won’t take them long (probably less than two decades) to become just as proficient as human beings at analyzing video clips as well. The computers of the near future won’t simply look for cats in videos; instead they will focus on human emotions: are the people in the video clip happy? Are they sad, or just frustrated?
And this will just be the beginning.
Recall that in the omni-connected world, we are all monitored all the time by wearable computers that listen to our heartbeats, record our temperature and voices, and measure our activity, the food that we eat, and everything in between. All of this data is uploaded to the cognitive computers, which can determine what’s going on in our lives at any given moment. They can know where we are, what we do, and even how we feel about it: whether we are sad, sexually aroused, or anything in between.
The World of Karma
We already have apps today that try to quantify kindness and good deeds, and pay you back for them. The only problem is that they’re extremely reliant on the judgment of human beings, and most people feel very uncomfortable asking someone else to rate a kind deed they did. Or they can just lie and report from their bed that they saved the world three times over. These challenges are difficult to overcome without some kind of ‘god’ watching over us all and quantifying our actions.
Now let’s call this ‘god’ an omni-computer, in an omni-connected world.
In an omni-connected world which is constantly analyzed by computers, the doings of every individual are recorded and compared to the direct impact they have on the lives of everyone else. If I stop to help someone whose car is stuck at the side of the road, the omni-computer up above knows I performed a good deed because of the beneficial effect it had on the physiology of the poor fellow I helped. And when I insult someone on the street, it similarly knows I acted negatively. Now we only have to program the algorithms that will make the omni-computer repay my kindness.
In the world of the future, then, whenever I stop to give a helping hand to a stranger, I can know for certain that my deed is recorded, and that I will receive some kind of help in the near future in return. I may walk in the market place, feel hot and sweaty, and immediately get handed a cool beverage by a passerby. Or maybe, if I gather enough ‘kindness coins’, I can even receive larger gifts and more substantial aid from strangers. I call it The World of Karma, for obvious reasons.
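As a thought experiment, the ‘kindness coins’ bookkeeping could look something like this toy ledger. The deed names and coin values are, of course, entirely invented:

```python
class KarmaLedger:
    """Toy ledger for the World of Karma: deeds earn or cost kindness coins."""

    # Hypothetical values; in the scenario, the omni-computer would infer
    # each deed and its impact from sensor data rather than a fixed table.
    DEED_VALUES = {
        "helped stranded driver": 5,
        "handed cool beverage": 1,
        "insulted someone": -2,
    }

    def __init__(self):
        self.balances = {}  # person -> kindness-coin balance

    def record(self, person, deed):
        self.balances[person] = self.balances.get(person, 0) + self.DEED_VALUES[deed]

    def redeem(self, person, cost):
        """Spend coins on help from strangers; fails if the balance is too low."""
        if self.balances.get(person, 0) < cost:
            return False
        self.balances[person] -= cost
        return True

ledger = KarmaLedger()
ledger.record("me", "helped stranded driver")
ledger.record("me", "insulted someone")
print(ledger.balances["me"])   # 5 - 2 = 3
print(ledger.redeem("me", 1))  # True: one coin spent on a cool beverage
```

The sketch also makes the closing dilemma concrete: once every deed has an explicit price, the ledger rewards kindness – but it also turns kindness into a transaction.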
While I realize such a future world sounds quite weird to us, that doesn’t mean it’s impossible or even improbable. It’s just weird – which makes it a suitable contender as one of the futures that may become real, since a future that looks just like the present is almost certainly a comforting lie that we tell ourselves.
I would like to leave you with one last thought. In the World of Karma, people’s kindness can be an enormous force for good, since everyone knows that every act of kindness is recorded, counted and will aid them in return at some point in the future. And yet, is this true kindness? Altruistic kindness, after all, is based on the idea that people help each other without expecting a return. Can such altruistic kindness exist in a world where every deed has a value, and “no good deed goes unpaid”?
I often imagine myself meeting James Clerk Maxwell, one of the greatest physicists in history, and the one indirectly responsible for almost all the machinery we use today – from radios to television sets and even power plants. He was recognized as a genius in his own time, and became a professor at the age of 25. His research resulted in Maxwell’s Equations, which describe the connection between electric and magnetic fields. Every electronic device in existence today, and practically all the power stations transmitting electricity to billions of souls worldwide, owe their existence to Maxwell’s genius.
And yet when I approach that towering intellectual of the 19th century in my imagination, and try to tell him about all that has transpired in the 20th century, I find that he does not believe me. That is quite unseemly of him, seeing as he is a figment of my imagination, but when I devote some more thought to the issue, I realize that he has no reason to accept any word that I say. Why should he?
At first I decide to go cautiously with the old boy, and tell him about X-rays, whose discovery was made in 1895, just 16 years after Maxwell’s death. “Are you talking of light that can go through the human body and chart all the bones in the way?” he asks me incredulously. “That’s impossible!”
And indeed, no scientific school of 1879, the year of Maxwell’s death, could support the idea of X-rays.
I decide to jump ahead and skip the theory of relativity, and instead tell him about the atom bombs that demolished Hiroshima and Nagasaki. “Are you trying to tell me that just by banging together two pieces of that chemical which you call Uranium 235, I can release enough energy to level an entire town?” he scoffs. “How gullible do you think I am?”
And once again, I find that I cannot fault him for disbelieving my claims. According to all the scientific knowledge of the 19th century, energy cannot come from nowhere. Maxwell, for all his genius, does not believe me, and could not have forecast these advancements when he was alive. Indeed, no rational forecaster of the 19th century would have made these predictions about the future, since they all suffered from the Failure of the Paradigm.
A paradigm, according to Wikipedia, is “a distinct set of concepts or thought patterns”. In this definition one could include theories and even research methods. More to the point, a paradigm describes what can and cannot happen. It sets the boundaries of belief for us, and any forecast that falls outside of these boundaries requires the forecaster to come up with extremely strong evidence to justify it.
Up until modern times and the advent of science, paradigms changed at a snail’s pace. People in medieval times largely assumed that their children would live and die the same way they themselves did, as would their grandchildren and great-grandchildren, until the day of the Rapture. But then science came, with thousands of scientists researching the movement of the planets, the workings of the human body – and the connections between the two. And as they uncovered the mysteries of the universe and the laws that govern our bodies, our planet and our minds, paradigms began to change, and the impossible became possible and plausible.
The discovery of X-rays is just one example of an unexpected shift in paradigms. Other such shifts include –
Using nuclear energy in reactors and in bombs
Lord Rutherford, the “father of nuclear physics” of the early 20th century, often denigrated the idea that the energy locked within matter would ever be utilized by mankind – and yet, one year after his death, the fission of the uranium nucleus was discovered.
Electric motors and power plants
According to legend, the great experimental physicist Michael Faraday was paid a visit by government representatives back in the 19th century. Faraday showed the delegation his clunky and primitive electric motors – the first of their kind. The representatives were far from impressed, and one of them asked, “What could possibly be the use of such toys?” Faraday’s answer (which is probably more urban myth than fact) was simple: “What use is a newborn baby?”
Today, our entire economy and life are based on electronics and on the power obtained from electric power plants – all of them based on Faraday’s innovations, and completely unexpected at his time.
Induced Pluripotent Stem Cells
This paradigm shift happened just nine years ago. It was long believed that biological cells, once they mature, can never ‘go back’ and become young again. Shinya Yamanaka and other researchers turned that belief on its head in 2006, by genetically reprogramming mature cells back into a youthful state, turning them into stem cells. That discovery earned Yamanaka his 2012 Nobel Prize.
How Paradigms Advance
It is most illuminating to see how computers advanced throughout the 20th century, constantly shifting from one paradigm to another along the years. From 1900 to the 1930s, computers were electromechanical in nature: slow and cumbersome constructs built from electric switches. As technology progressed and new scientific discoveries were made, computers moved on to electric relay technology, and then to vacuum tubes.
One of the first and best known computers based on vacuum-tube technology was ENIAC (Electronic Numerical Integrator and Computer), which weighed 30 tons and consumed about 150 kilowatts of electricity. It could perform 5,000 calculations a second – a task which every smartphone today exceeds without breaking a sweat, since smartphones are based on the newer paradigms of transistors and integrated circuits.
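To get a feel for how far the transistor and integrated-circuit paradigms have carried us since ENIAC, here is a rough back-of-the-envelope comparison. The smartphone figure is an assumed order-of-magnitude guess, not a measurement:

```python
# Rough speed comparison between ENIAC and a modern smartphone.
eniac_ops_per_sec = 5_000  # additions per second, as cited in the text
phone_ops_per_sec = 1e9    # assumption: ~1 billion simple operations per second

speedup = phone_ops_per_sec / eniac_ops_per_sec
print(f"Under these assumptions, a smartphone is about {speedup:,.0f}x faster than ENIAC.")
```

Even with this deliberately conservative smartphone estimate, the gap is five orders of magnitude – a gap no amount of incremental vacuum-tube engineering could have closed.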
At each point in time, if you were to ask most computer scientists whether computers could progress much beyond the current state of the art, the answer would have been negative. If the scientists and engineers working on ENIAC had been told about a smartphone, they would have been completely baffled. “How can you put so many vacuum tubes into one device?” they would have asked. “And where is the energy to operate them all going to come from? This ‘smartphone’ idea is utter nonsense!”
And indeed, one cannot build a smartphone with vacuum tubes. The entire computing paradigm needed to change in order for this new technology to appear on the world’s stage.
What does the Failure of the Paradigm mean? Essentially, it means that we cannot reliably forecast a future distant enough for a paradigm shift to occur. Once the paradigm changes, all previous limitations and boundaries dissolve, and what happens next is up for grabs.
This insight may sound gloomy, since it makes clear that reliable forecasts are impossible a decade or two into the future. And yet, now that we understand our limitations, we can consider ways to circumvent them. The solutions I’ll propose for the Failure of the Paradigm are not as comforting as the mythical idea that we can know the future, but if you want to be better prepared for the next paradigm, you should consider employing them.
Solutions for the Failure of the Paradigm
First Solution: Invent the New Paradigm Yourself
The first solution is quite simple: invent the new paradigm yourself, and thus be the one standing on top when it takes hold. The only problem is, nobody is quite certain what the next paradigm is going to be. This is the reason we see the industry giants of today – Google, Facebook, and others – buying companies left and right. They’re purchasing drone companies, robotics companies, A.I. companies, and any other venture that looks as if it has a chance to grow into a new and successful paradigm a decade from now. They’re spreading and diversifying their investments, since if even one of these investments leads to the new paradigm, they will be the Big Winners.
Of course, this solution can only work for you if you’re an industry giant, with enough money to spare on many dead-end directions. If you’re a smaller company, you might consider the second solution instead.
Second Solution: Utilize New Paradigms Quickly
The famous entrepreneur Peter Diamandis often encourages executives to invite small teams of millennials into their factories and companies, and to ask them to actively come up with ideas to disrupt the current workings of the company. Millennials – people between 20 and 30 years old – are less bound by old paradigms than the people currently working in most companies. Instead, they are living the new paradigms of social media, internet everywhere, constant surveillance and loss of privacy, and so on. They can utilize and deploy the new paradigms rapidly, in a way that makes the old ones seem antique and useless.
This solution, then, helps executives circumvent the Failure of the Paradigm by adapting to new paradigms as quickly as possible.
Third Solution: Forecast Often, and Read Widely
One of the rules of effective forecasting, as noted futurist Paul Saffo wrote in the Harvard Business Review in 2007, is to forecast often. The proficient forecaster needs to be constantly on the alert for new discoveries and breakthroughs in science and technology – and be prepared to suggest new forecasts accordingly.
The reason behind this rule is that new paradigms rarely (if ever) appear out of the blue. There are always telltale signs, called ‘weak signals’ in foresight jargon. Such weak signals can be uncovered by searching through new patents, reading Scientific American, Science and Nature to learn about new discoveries, and generally browsing the New York Times every morning. By doing so, one can develop a better hunch about an oncoming paradigm.
Fourth Solution: Read Science Fiction
You knew that one was coming, didn’t you? And for good reason, too. Many science fiction novels are based on some kind of paradigm shift that forces the world to adapt. Sometimes it’s a global computer network (which William Gibson famously imagined as ‘cyberspace’ in his fiction), or rockets being sent to the moon (as in Jules Verne’s From the Earth to the Moon), or cloning, genetic engineering and the resurrection of extinct species, as in Michael Crichton’s Jurassic Park.
Science fiction writers consider possible paradigm shifts and analyze their consequences and implications for the world. Gibson and other science fiction writers understood that if a global network were created, we would have to deal with cyber-hackers, with cloud computing, and with the mass democratization of information. In short, they forecast the implications of the new paradigm.
Science fiction does not provide us with a solid forecast for the future, then, but it helps us open our minds and escape the Failure of the Paradigm by considering many potential new paradigms at the same time. While there is no research to support this claim, I truly believe that avid science fiction readers are better prepared for new paradigms than everyone else, having already lived through those paradigms in their minds.
Fifth Solution: Become a Believer
When trying to look far into the future, don’t focus on the obstacles of the present paradigm. Rather, if you see that similar obstacles have repeatedly been overcome in the past (as happened with computers), there is good reason to assume that the current obstacles will be defeated as well, and a new paradigm will shine through. Therefore, you have to believe that mankind will keep finding solutions and developing new paradigms. The forecaster is forced, in short, to become a believer.
Obviously, this is one of the toughest solutions to implement for us as rational human beings. It also requires us to look carefully at each technological field in order to understand the nature of the obstacles, and how long it will take (judging by past trends) to come up with a new paradigm to overcome them. Once the forecaster identifies these parameters, he can be more secure in his belief that new paradigms will be discovered and established.
Sixth Solution: Beware of Experts
This is more of an admonishment than an actual solution, but it is true all the same. Beware of experts! Experts are people whose knowledge was developed during the previous paradigm, or at best during the current one. They often have a hard time translating their knowledge into useful insights about the next paradigm. While they can highlight all the difficulties existing in the current paradigm, it is up to you to consider how in touch those experts are with the next potential paradigms, and whether or not to listen to their advice. That’s what Arthur C. Clarke’s first law is all about –
“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”
The Failure of the Paradigm is a daunting one, since it means we can never forecast the future as reliably as we would like to. Nonetheless, business people today can employ the above solutions to be better prepared for the next paradigm, whatever it turns out to be.
Of all the proposed solutions to the Failure of the Paradigm, I like the fourth one the best: read science fiction. It’s a cheap solution that also brings much enjoyment to one’s life. In fact, when I consult for industrial firms, I often hire science fiction writers to write stories about the possible future of the company in light of a few potential paradigms. The resulting stories are read avidly by many of the employees in the company, and in many cases show the executives just how unprepared they are for these new paradigms.
I was playing World of Warcraft – the famous Massive Multiplayer Online Role Playing Game (MMORPG) – last night, and became a member of a party of five players in order to complete a challenging dungeon. Normally, journeying together with four other people in a virtual world can be heaps of fun. The warriors hit monsters, the healers heal the warriors, and everybody is having fun together.
Well, not this time.
Halfway through the dungeon, one of the players began spouting nationalist slurs – “Russia rule you soon”, for one, and “Filthy Ukrainian” among others. The response was pretty immediate – after a minute or two of shock, the offending player was kicked out of the party. We found another player in less than a minute and completed the dungeon at our leisure. The remarks, though, left an impression on me and made me think all through the evening about an interesting question: why don’t we see more breaches and break-ins from the physical world into the virtual one?
Perhaps the term “virtual worlds” needs to be better defined. After all, Facebook too is a virtual world, and we see people bringing their problems and biases from the physical world into Facebook all the time. World of Warcraft, though, much like other MMORPGs, is a different virtual world. It’s a simulation, in fact, of a fantasy world filled with dragons, dungeons and real monsters who would like nothing more than to chew on your virtual bones.
This detachment from reality is probably the most important difference between MMORPGs and Facebook: on Facebook, you’re supposed to ‘play’ yourself and emphasize your views on the physical world. MMORPGs, however, are viewed more as vacation-time from reality. You go to MMORPGs to escape the conflicts of the physical world, not to accentuate them. This common understanding among players helps ensure that few incursions between the two worlds occur.
It is also my belief (and I don’t know of any research to support it, since the field of MMORPGs has largely been ignored by political and social scientists) that MMORPGs bring into the equation something that we humans sorely lack in the modern age: an evil enemy. Namely, I’m speaking of the computer that controls the world and the monsters in it. Those monsters will kill you if you don’t get strong enough. They are the ultimate evil – they can’t be reasoned with, and you can’t deliberate with them. It’s a kill-or-be-killed environment, in which you have to become constantly stronger just to survive.
Compare this black and white environment to the one we experience in the physical world. In past times, tribal and national leaders tried to paint their enemies with a good vs. evil color palette. Namely: we’re the good guys, and they’re the bad guys. This kind of stereotyping doesn’t really work so well anymore, now that you can read everywhere about the woes and dilemmas of the other side, and realize that they’re humans just like you are. But realizing and accepting this fact requires conscious effort – it’s so much easier to hate, demonize and vilify the other side!
What wonder, then, that players are so happy to leave behind the grey national animosities of the physical world, and fight the good fight in the virtual worlds?
Meaning for the Future
These thoughts are pretty preliminary and shallow, and I post them here only because they are important for our future. In a decade or two from now we will enter a world in which the virtual and the physical aspects become mixed together constantly. As I wrote in an earlier post, wearable augmented reality devices are going to transform every street and every walking lane into a dungeon or a grassland field filled with monsters and treasure.
The virtual world is different from the physical one in many aspects, but one of the most important is that virtual wealth is infinite and priceless. One can find enormous treasures in the virtual world, beat his virtual computer-controlled opponents time after time, and in the future also enjoy virtual love (or at least sex) with virtual entities.
But what is the meaning of life in a virtual world? And since we’re about to experience a mixed-reality world soon, we must also consider: how do we keep on providing meaning and motivation to everyone in it?
It is possible that, based on the lessons of World of Warcraft and other MMORPGs, the programmers of the mixed-reality world will put an emphasis on the creation of true evil: of evil ghosts and dragons, and a perpetual fight for (virtual) survival against them. Maybe then, when we’re confronted by a greater enemy, we’ll be able to overlook our religious, national and racial biases and come together to fight the good fight in a game that will span nations and continents.
Does the future of mixed reality hold dragons in store for us all, then? One can only hope.
Everywhere you go, you can find scientists and engineers doing 3-D printing. They may be using it to print bridges over water, or buildings and houses, or even hearts and livers and skull parts. In fact, we’re hearing so much about 3-D printers creating the normal and ordinary stuff all over again, that it’s becoming pretty boring.
This, of course, is how technology makes progress: slowly, with iterative changes being added all the time. We’re currently using 3-D printers just to recreate the old stuff we’re used to. The makers and creators today are mainly interested in demonstrating the capabilities of the printers, and put less emphasis on actually innovating and creating items that have never existed before; the clients and customers, of course, don’t want anything too extraordinary either. That’s the reason we’re 3-D printing a prosthetic ear that looks just like a normal ear, instead of printing a Vulcan ear.
What happens if we let go of the ordinary and customary, and begin rethinking and reimagining the items and organs we currently have? That’s just what Manu S. Mannoor, Michael C. McAlpine and their groups at Princeton and Johns Hopkins Universities did. They made use of a 3-D printer to create cartilage tissue in the shape of a human ear, along with a conductive polymer infused with silver nano-particles. The end result? A bionic ear that should look and feel just like an ordinary ear, but has increased radio frequency reception. It is not far-fetched to say that Mannoor and McAlpine have printed the first biological ear that could also double as a radio receiver.
Where else may we see such a combination of the biological and the synthetic? This is a fascinating thought experiment that could help us generate a few forecasts about the future. If I had to guess, I would venture a few combinations for the next twenty years –
Radio-conductive bones: have you come in for a hip replacement, and also happen to have a pacemaker or some other implant? The surgeons will supply you with a hip bone printed specifically for you, which will also contain conductive elements that help radio waves penetrate deeper into the body, so that the implants can more easily receive energy from outside via radio waves or some kind of induction.
Drug delivering tattoos: this item is not 3-D printed, but it’s still an intriguing combination of a few different concepts. Tattoos are essentially the result of an injection of nano- and micro-particles under the skin. Why not use specific particles for added purposes? You can create beautiful tattoos of dragons and princesses and butterflies that can also deliver medicine and insulin to the bloodstream, or even deliver adrenaline when pressed or when experiencing a certain electrical field that makes the particles release their load. Now here’s a tattoo that army generals are going to wish their soldiers had!
Exquisite fingernails: the most modern 3-D printers come with a camera and A.I. built-in, so that they can print straight on existing items that the user places in the printer. Why don’t we make a 3-D printer that can print directly on fingernails with certain kinds of materials? The fingernails of the future – which will be printed anew every day – might contain tiny batteries that will power smartphones by touch, or microphones that could record everything that happens around the user.
These are obviously just three rudimentary ideas, but they serve to show what we could gain by leaving behind the idea that new manufacturing technologies should adhere to the “old and proven”, and advance ahead to novel utilities.
In the end, the future is never just “same old same old”; it is all about shedding the customs of the past and creating new ones. And so, if I had to guess, I would wager that such a unification of concepts into new and bizarre devices will give us a much more accurate view of the future than the one we currently gain by showing how 3-D printers can build yet another house and yet another human organ.
What are your ideas for future combinations of biological and synthetic components? Write them down in the comments section!
Two weeks ago it was “Back to the Future Day”. More specifically, Doc and Marty McFly reached the future on exactly October 21st, 2015, in the second movie in the series. Being a futurist, I was invited to several television and radio talk shows to discuss the shape of things to come, which is pretty ridiculous, considering that the future is always about to come, and we should talk about it every day, and not just on a day arbitrarily chosen by the scriptwriters of a popular movie.
All the same, I’ll admit I had an uplifting feeling. On October 21st, everybody was talking about the future. That made me realize something about science fiction: we really need it. Not just for the technological ideas that it gives us (like cellular phones and Tricorders from Star Trek), but also for the expanded view of the future that it provides us with.
Sci-fi movies and books take root in our culture, and establish a longing and an expectation for a well-defined future. In that way, sci-fi creations provide us with a valuable social tool: a radically prolonged cycle-time, which is the length of time an individual in society tends to look forward to and plan for in advance.
Cycle-times in the Past
As human beings, and as living organisms in general, we have been shaped by Mother Evolution to fulfill one main goal: transferring our genes to our descendants. We are, to paraphrase Richard Dawkins, trucks that carry the load of our genes into the future, as far as possible from our current starting point. It is curious to realize that in order to preserve our genes into the future, we must be almost totally aware of the present. A prehistoric person who was not always on the alert for encroaching wolves, lions and tigers would not have survived very long. Millions of years of evolution have designed living organisms to focus almost entirely on the present.
And so, for the first few tens of thousands of years of human existence, we ran away from the tigers and chased after the deer, with a very short cycle-time, probably lasting less than a day.
It is difficult, if not impossible, to know when exactly we managed to strike a bargain with Grandfather Time. Such a bargain provided early humans with great power, and all they needed to do in return was to measure and document the passing of hours and days. I believe that we started measuring time quite early in human history, since time measurement brought power, and power ensured survivability and the passing of genes and time-measurement methodologies to the next generation.
The first cycle-time was probably quite short, lasting no more than a full day. Early humans could roughly calculate how long it would take the sun to set according to its position in the sky, and so they knew when to start or end a hunt before darkness fell. Their cycle-time was a single day. The woman who wanted to anticipate her upcoming menstruation period – which could attract predators and make it more difficult for her to hunt – could do so by looking at the moon, and by making a mark on a stick every night. Her cycle-time was a full month.
The great leap forward occurred in agricultural civilizations, which were based on an understanding of the cyclical nature of time: a farmer must know the cyclical order of the seasons of the year, and realize their significance for his field and crops. Without looking ahead a full year into the future, agricultural civilizations could not reach their full height. And so, ten thousand years ago, the first agricultural civilizations set a cycle-time of a whole year.
And that is pretty much the way it remained ever since that time.
Religions initially had the potential to provide longer cycle-times. The clergies have often documented history and made attempts to forecast the future – usually by creating or establishing complex mythologies. Judaism prolonged the agricultural cycle-time, for example, by setting a seven-year cycle of tending one’s field: six years of growing crops, and a seventh year (Shmita, in Hebrew) in which the fields are allowed to rest.
“For six years you are to sow your fields and harvest the crops, but during the seventh year let the land lie unplowed and unused.” – Exodus 23:10-11.
Most of the religious promises for the future, however, were usually vague, useless or even harmful. In his book, The Clock of the Long Now, Stewart Brand repeats an old joke that caricaturizes with more than a shred of truth the difficulties of the Abrahamic religions (i.e. Judaism, Christianity and Islam) in dealing with the future and creating useful cycle-times in the minds of their followers. “Judaism,” writes Brand, “says [that] the Messiah is going to come, and that’s the end of history. Christianity says [that] the Messiah is going to come back, and that’s the end of history. Islam says [that] the Messiah came, and history is irrelevant.” [the quote has been slightly modified for brevity]
While this is obviously a joke, it reflects a deeper truth: that religions (and cultures) tend to focus on a single momentous future, and ignore anything else that comes along. Worse, the vision of the future they give us is largely unhelpful since its veracity cannot be verified, and nobody is willing to set an actual date for the coming of the Messiah. Thus, followers of the Abrahamic religions continue their journey into the future, with their eyes covered with opaque glasses that have only one tiny hole to let the light in – and that hole is in the shape of the Messiah.
Why We Need Longer Cycle-times
When civilizations fail to consider the future in long cycle-times, they head towards inevitable failure and catastrophe. Jared Diamond illustrates this point time and time again in his masterpiece Collapse, in which he reviews several extinct civilizations, and the various ways in which they failed to adapt to their environment or plan ahead.
Diamond describes how the Easter Islanders did not think in the cycle-times of trees and earth and soil, but instead in shorter, human cycle-times. They greedily cut down too many of the island’s trees, and over several decades they squandered the island’s natural resources. Similarly, the settlers in Greenland could not think in a cycle-time long enough to encompass the grasslands and the changing climate, and were forced to evacuate the island or freeze to death after their goats and cattle damaged Greenland’s delicate ecology.
Agricultural civilizations, as I wrote earlier, tend by nature to think in cycle-times no longer than several years, and find it difficult to adjust their thinking to longer cycle-times: ones that apply to trees, earth, and the evolution of animals (and humans). As a result, agricultural civilizations damage all of the above, disrupt their environment, and eventually disintegrate and collapse when their surroundings can’t support them anymore.
If we wish to keep humanity in existence over time, we must switch to thinking in longer cycle-times that span decades and centuries. This is not to say that we should plan too far ahead – it’s always dangerous to forecast into the long term – but we should constantly attempt to consider the consequences of our doings in the far-away future. We should always think of our children and grandchildren as we take steps that could determine their fate several decades from now.
But how can we implement such long-term cycle-times into human culture?
If you still remember where I began this article, you probably realize the answer by now. In order to create cycle-times that last decades and centuries, we need to visit the future again and again in our imagination. We need to compare our achievements in the present to our expectations and visions of the future. This is, in effect, the end-result of science fiction movies and books: the best and most popular of them create new cycle-times that become entwined in human culture, and make us examine ourselves in the present, in the light of the future.
Science fiction movies and stories have an impressive capability to influence social consciousness. Karel Capek’s 1920 theater play R.U.R., for example, not only added the word “robot” to the English lexicon, but also infected western society with the fear that robots will take over mankind – just as they did in Capek’s play. Another influential movie, The Terminator, was released in 1984 and solidified and consolidated that fear.
Science fiction does not have to make us fear the future, though. In Japanese culture, the cartoon robot Astro Boy became a national symbol in 1952, and ever since the Japanese have been much more open and accepting towards robots.
The most influential science fiction creations are those that include dates, which in effect are forecasts for certain futures. These forecasts provide us with cycle-times that we can use to anchor our thinking whenever we contemplate the future. When the year 1984 came, journalists all over the world tried to analyze society and see whether George Orwell’s dark and dystopian dream had actually come true. When October 21st, 2015 arrived barely two weeks ago, I was interviewed almost all day long about the technological and societal forecasts made in Back to the Future. And when the year 2029 finally comes – the year in which Skynet is supposed to be controlling humanity, according to The Terminator – I confidently forecast that numerous robotics experts will find themselves invited to talk shows and other media events.
As a result of the above science fiction creations, and many others, humanity is beginning to enjoy new and ambitious cycle-times: we look forward in our mind’s eye towards well-designated future dates, and examine whether our apocalyptic or utopian visions for them have actually come true. And what a journey into the future that is! The most humble cycle-times in science fiction span several decades ahead. The more grandiose ones leap forward to the year 2364 (Star Trek), 2800 (Dan Simmons’ Hyperion Cantos) or even to the end of the universe and back again (in Isaac Asimov’s short story The Last Question).
The longest cycle-times of science fiction – those dealing with thousands or even millions of years ahead – may not be particularly relevant for us. The shorter cycle-times of decades and centuries, however, receive immediate attention from society, and thus have an influence on the way we conduct ourselves in the present.
Humanity has great need of new cycle-times that will be far longer than any established in its history. While policy makers attempt to take into account forecasts that span decades ahead, the public is generally not exposed to or influenced by such reports. Instead, the cycle-times of many citizens are calibrated according to popular science fiction creations.
Hopefully, those longer cycle-times will allow humanity to prepare in advance for longer-term existential challenges, such as ecological catastrophes or social collapse. At the same time, longer cycle-times can also encourage and push forward innovation in certain areas, as entrepreneurs and innovators struggle to fulfill the prophecies made for certain future technological developments (just think of all the clunky hoverboards that were invented in the run-up to 2015 as proof).
In short, if you want to save the future, just write science fiction!
I loved her, on the spot. There was something in her stance, her walk, her voice. Hesitantly, I approached and opened a light chat. There was an immediate connection, a feeling of rapport between us. Finally, I dared pop the question – “Do you want to meet again tomorrow?”
She went quiet for a second, then asked to see my social credit rating. I tried to keep my face still while I took out my smartphone and showed it to her.
She went quiet for more than a few seconds…
This system – a social credit rating – is in the process of being created and implemented today in China. If it works out well, it’s going to have an impact that will spread far beyond the People’s Republic, and may become part of our lives everywhere. Because, you see, this system might actually be a good idea – as long as we use it wisely.
What is a social credit rating? In a way, it’s similar to the ordinary credit history rating used in America and other countries. Every person in America, for example, has a credit history that speaks volumes about their past behavior, how promptly they repay their loans, and how they handle their money. When one applies for a new loan, a mortgage or even a new credit card, the banks and financial institutions take a good hard look at the inquirer’s credit history to decide whether or not they can safely grant that loan.
Up until today, only 320 million individuals in China have had any kind of credit history, out of 800 million people registered with China’s central bank. Things are about to change, though, since the Chinese government is authorizing several companies to collect and compare information about the citizens, thus creating an omnipotent, omniscient system that assigns a “social credit rating” to anyone who uses any kind of online service, including dating sites like Baihe and commercial sites like Alibaba.
And the Chinese people are really gobbling it up.
While it’s obviously difficult to know how the common person in the street is responding, it looks like the Chinese companies (again, under close scrutiny and agreement by the government) really know how to sell the idea to their customers. In fact, they’re letting the customers ‘sell’ the idea themselves to their friends, by turning the social credit rating into a game. People are comparing their ratings to each other, and are showing their ratings on their smartphones and their profiles on dating services. For them, it has become a game.
But it is a game with very serious consequences.
Her face fell when she saw my rating. I talked quickly – “I-It’s not what it looks like. You gotta understand, I didn’t have the money to repay Big Joe last week, but now I’m getting all the wages I was owed. Seriously, it’s OK. I’m OK financially. I really am.”
There’s no denying that credit history ratings can serve a positive purpose. They alert individuals and companies to the past history of con artists, scammers and generally unscrupulous people whom you’d rather not have any dealings with. The mere presence of a credit history rating could cause people to trust each other better, and also to behave themselves in their financial dealings. People from market societies tend to deal more fairly with strangers because they know their actions are always counted for or against them. A credit history rating essentially makes sure that everyone knows they are monitored for best behavior – and that’s a powerful force that can help maintain civil order.
It is also a powerful tool in the hands of a government that wants to control its citizens.
She bit her bottom lip, and her brow furrowed. She kept my smartphone in her hand, scrolling down quickly and reading all the fine details. Suddenly she raised her head and stared at me.
“You played Assassin’s Creed for one hundred hours last month?” she demanded to know. I nodded dumbly, and watched as the smile spread slowly on her lips. “I love that game! I play it all the time myself!”
I felt butterflies swimming across my vision. She was obviously The One for me. Such a perfect fit could never happen by chance. And yet, I felt I needed to check one last thing.
“Can I see your social rating too?” I asked timidly, and waited an eternity for her answer.
It’s pretty easy to understand how one’s credit history rating in America is determined: you just need to pay your bills on time to maintain a good credit history. A social credit rating, however, is a different thing altogether. At least one of the companies in charge of calculating it will not expose how the rating is determined, except to say that the calculation is based on a “complex algorithm”. Which essentially means that nobody knows exactly how they’re being judged and rated – except for the big companies and the government.
Does that make you feel like the Chinese are entering an Orwellian totalitarian rule? It should. There are persistent rumors that the Chinese social credit rating will be determined according to the users’ online activities on social media. When the Chinese government is in total control, who do you think will get a better social rating: the citizens who support the government unconditionally, or the dissidents who oppose the government’s doings?
In short, a social rating could be a great way for any government to control the population and the opinions and points of view it advocates and stands for. And since the social rating could be a dynamic and constantly changing parameter, it could change rapidly according to every new action a person takes, every sentence and cussword they utter. The government only has to set the rating algorithms – and the automated monitoring and rating systems will do the rest.
I walked back and forth in my small room, silently cursing myself for my foolishness. So what if her social rating was so low? She must have been a supporter of the opposition for it to drop by so much, but what of it? I’m not a political activist anyway. Why should I care?
And yet, I had to admit to myself that I cared. How could I go out with someone with that kind of a low rating? All my friends would know. They’d laugh at me behind my back. Worse still, my own social rating would go down immediately. I would not only be the laughing stock of my class at the university – I would no longer even be eligible for a scholarship, and all my dreams of a higher degree would end right there and then.
I sighed, and sat back on the squeaky bed. She just wasn’t right for me, at this time in my life. Maybe when I have a better social rating, to balance her own. Maybe the algorithms would change their decision about her someday.
But that would probably be too late for us.
The social rating system is currently voluntary, but within five years China plans to rank everyone within its borders. It’s going to be interesting to see how that works out for the Chinese. And who knows? Maybe we’ll get a taste of the system as well, probably sooner rather than later. Whether that’s a good or a bad thing is still up for grabs.
A few days ago I wrote a post about the WHO’s declaration that processed meat can cause cancer in human beings. Since posts from this blog also appear on my Facebook page, and many people comment there, I noticed a curious phenomenon: the knee-jerk response of many commenters was to cast doubt on the findings of the committee that reached these decisions. Some of the doubters hinted that the committee members had ulterior motives. Others contended that the studies the committee relied upon could not distinguish between meat eating and the many other lifestyle choices that could heighten the risk of cancer.
Many indeed were those who doubted the results, for many wide-ranging reasons. And yet, from reading the comments it’s quite clear that none of them knows who exactly the committee members are, or which 800 papers they relied upon to make the decision. The main objective of the comments was to disparage results that stand in contrast with the commenters’ current way of life.
Now, I’ll say straight away that the transparency of the evaluation process is definitely at fault. I haven’t yet had any success in finding the names of the experts on the committee, nor details about the “800 different studies on cancer in humans” they examined, or how much weight each study carried for them. In a world of information and transparency, it seems almost ridiculous that a body such as the WHO does not provide easy access to these details, so that independent researchers and thinkers can make their own evaluations.
All the same, the first wave of doubters we face now is probably a sign of the near future of the meat arena. In fact, if we learn anything from the way other industry giants have dealt with uncomfortable scientific evidence in the past, it’s that the spreaders of doubt will soon become prevalent in social media and on radio and TV.
Doubt, Tobacco and Climate Change
In the middle of the 20th century, the tobacco industry found itself facing a difficult challenge. An increasingly large number of scientific studies revealed a connection between smoking and cancer. The tobacco companies turned to one of the leading PR firms of the day, Hill & Knowlton, which reframed the situation: the dilemma was not whether or not smoking causes cancer, but what the public thinks on the matter. A key memo emphasizes the real issue from their point of view –
“There is only one problem—confidence and how to establish it; public assurance, and how to create it.”
In other words, the tobacco industry realized that it needed to create doubt about the scientific evidence. To that end, the industry founded ‘independent’ organizations that ‘studied’ the subject and reached conclusions bearing almost no relation to the scientific reality or consensus. The industry also supported and promoted scientists who were willing to speak on behalf of tobacco and to publish studies (shaky as they were) disputing the connection between smoking and cancer.
I’ll admit this accusation would’ve seemed much like a conspiracy theory, were it not for the fact that the tobacco companies’ internal communications were eventually made public. The industry could only hold back the scientific evidence for a few decades. Eventually, at the end of the 1990s, forty-six states filed a collective lawsuit against the four largest tobacco companies. The companies agreed to pay a large fine, to shut down the ‘independent’ research organizations they had funded, such as the Center for Indoor Air Research, and to make all the related documents available to the public. This is how we know today what the history of tobacco in America really looks like: a grim mix of propaganda and greed, poured onto the public by the big companies. Overall, the tobacco industry worked actively to plant and promote disinformation, significantly damaging the public’s ability to act in an informed and enlightened way against smoking. Since a billion people smoke worldwide today, and since the life expectancy of smokers is ten years shorter than that of their peers, it can be said that the tobacco companies have cost humanity ten billion years of life.
That is a pretty hefty fee to pay.
We see the same doubt-casting strategy being used today by ExxonMobil to counter the scientific evidence for global warming and climate change, with some of the very scientists who denied the link between tobacco and cancer now also denying the link between human activity and climate change.
And quite soon, we’ll probably see it in use by the meat industry as well.
Meat and Doubt
The meat industry is already starting to cast doubt on the committee’s conclusions. Shalene McNeill, director of Human Nutrition Research at the National Cattlemen’s Beef Association, had this to say about the WHO’s declaration –
“These are studies that draw correlation, not causation. So these are studies that cannot be used to determine cause and effect.”
Her point is well known to all the scientists who reviewed these studies, so I can’t imagine any of them falling for this old interpretive trick.
Other statements made by the meat industry about the WHO’s ruling included “dramatic and alarmist overreach”, which seems strange in light of the fact that similar conclusions about the connection between meat and cancer have already been reached by the American Cancer Society and the World Cancer Research Fund. Nothing dramatic or overreaching here: if anything, the WHO is simply falling in line with the current scientific understanding of the issue.
Nathan Gray, science editor at the popular FoodNavigator site, reported last week that he had received a large number of responses from trade associations and PR agencies representing the meat industry. Most of these responses, according to Gray, claim that the committee’s findings are biased, and that “the science is undecided or misrepresented”.
In short, they’re all casting doubt. We’ve seen this strategy being used before. We’re seeing it again right now.
Conclusion and Forecast
You want a final forecast, don’t you? Well, here’s an easy one: unless some kind of miracle happens, we’re going to see a lot of doubt-mongering coming from the meat industry in the next few years. Get ready for it. It’s coming, and it’s also going to rely heavily on social media. Social media is the new communications arena, where anyone can level baseless accusations, spread rumors and thrive on attention. If ever there was a place almost designed for disinformation, this is it.
Get ready. The doubt industry is marshalling its forces once more.