A few days ago I decided that I wanted a new business card for the coming year. I headed straight to Fiverr and browsed through some of the graphic designers who offered their services for five dollars or more. After a few minutes, my choice was made: I picked the designer with more than a hundred 5-star ratings and not a single negative review.
Of course, the gig didn’t really cost five dollars. I added $10 to receive the source file as well, $5 for the design of a double-sided business card, and $5 for a “more professional work”, as the designer put it. Along with other bits, the gig cost $30 altogether, which is still a good price to pay for a well-designed card.
Then the troubles began.
I received the design within 24 hours. It was, simply put, nowhere near what I expected. The fonts were all wrong, the colors were messed up, and worst of all – the key graphical element on the front of the card was not centered properly, which indicated a lack of attention to detail that struck me as outright unprofessional. So I asked for a modification, which was implemented within a day. It was not much better than the original. At that point I thanked the designer and concluded the gig with a review of her work. I gave her three stars – possibly more than I felt her skills warranted – and wrote a review applauding her effort to fix things, while also mentioning that I was not satisfied with the final result.
An hour later, the designer sent me a special plea. She asked me, practically in virtual tears, to remove my review, telling me that we could cancel the order and go our separate ways. She told me that her livelihood depends on Fiverr, and that without high ratings, she would not be approached by other buyers in the future.
A discussion I had with a Fiverr service provider, who begged me to give her a higher rating
I knew that my money would not actually be returned to me, since Fiverr only deposits the refund in your Fiverr account, to be spent on future gigs. But seeing a maiden so distraught, and having an admittedly soft heart, I decided to play the gallant knight and deleted my negative review.
And so, I betrayed the community, and added to the myth of Fiverr.
Lessons for the No-Managers Workplace
In December 2011, the management guru Gary Hamel published an intriguing piece in the Harvard Business Review called “First, Let’s Fire All the Managers”. In the article, Hamel described a wildly successful company – The Morning Star Company – based on a model that makes managers unnecessary. The workers regulate themselves, criticize each other’s work, and deliberate together on the course of action their department should take. Simply put, everyone is a manager in Morning Star, and no one is.
You should read the article if this interests you (and it should), but to sum up – Morning Star has some 400 workers, so it’s not a small start-up, and the model it uses could definitely be scaled up for much larger companies. However, Hamel included a few admonishments, the first of which was the need for accountability: the employees at Morning Star must “deliver a strong message to colleagues who don’t meet expectations,” wrote Hamel. Otherwise, “self-management can become a conspiracy of mediocrity.”
The employees in Morning Star receive special training to make sure they understand how important it is that they provide criticism and feedback to other employees, and that they actually hurt all the other employees if such feedback is not provided and made public. Apparently the training works, since Morning Star has been steadily growing over the past few decades, while leaving its competitors far behind. In fact, today “Morning Star is the world’s largest tomato processor, handling between 25% and 30% of the tomatoes processed each year in the United States.”
Morning Star is a shining example of a no-managers workplace that actually works in a competitive market, since each person in the firm makes sure that the others are doing their jobs properly.
But what happens in Fiverr?
Is Fiverr Broken?
I have no idea how many service providers on Fiverr beg their customers for high ratings. I have a feeling that it happens much more frequently than it should, and that soft-hearted customers like me (and probably you too) can become at least somewhat swayed by such passionate requests. The result is that some service providers on Fiverr will enjoy a much higher rating than they deserve – which will in effect deceive all their future potential customers.
Fiverr could easily take care of this issue by banning such requests for high ratings, and by deploying an algorithm that screens the messages between clients and service providers to identify them. But why should Fiverr do that? Fiverr profits from having the seemingly best designers on the web, with an average five-star rating! Moreover, even in cases where the customer is extremely ticked off, all that happens is that the service provider doesn’t get paid. Fiverr keeps the actual money, and only provides compensation in virtual currency that stays in the Fiverr system. This is a system, in short, in which nobody is happy except for Fiverr: the customer loses money and time, and the service provider occasionally loses money and gets no incentive or real feedback that would make him or her improve in the long run.
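To be clear about how simple such a screen could be: a first pass needs nothing more than a keyword filter run over order messages. The sketch below is purely illustrative – the phrases, function names and flagging logic are my own assumptions, not anything Fiverr actually runs, and a real system would likely use a trained classifier rather than a hand-written list:

```python
import re

# Hypothetical phrases that often signal review manipulation.
# These patterns are invented for illustration only.
SOLICITATION_PATTERNS = [
    r"\bremove (?:your|the) review\b",
    r"\b(?:change|delete) (?:your|the) (?:rating|review)\b",
    r"\b(?:give|leave) (?:me )?(?:a )?5[- ]?star",
    r"\bcancel the order\b",
]

def flags_solicitation(message: str) -> bool:
    """Return True if a message appears to ask the buyer to alter a review."""
    text = message.lower()
    return any(re.search(pattern, text) for pattern in SOLICITATION_PATTERNS)

messages = [
    "Thanks for your order! The source file is attached.",
    "Please remove your review and we can cancel the order.",
]
# Keep only the messages that look like review solicitation
flagged = [m for m in messages if flags_solicitation(m)]
```

Flagged conversations could then be routed to a human moderator – the point is only that detecting the most blatant pleas requires very little engineering effort.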
Conclusion
As I wrote earlier, Fiverr could easily handle this issue. Since they do not, I rather suspect they like the way things work right now. However, I believe that sooner or later they will find that they have garnered themselves a bad reputation, which will keep future customers away from their site. We know that great start-ups that received large amounts of funding and hype, like Quirky, have toppled before because of inherent problems in their structures. I hope Fiverr will not fail in a similar fashion, simply because it doesn’t bother to winnow the bad apples from its orchard.
When Achariya, an ordinary woman from Cambodia, got pregnant, she was scared out of her wits. Pregnancy can become a death sentence for women in developing countries, with more than half a million mothers dying every year during pregnancy or childbirth. In Cambodia specifically, “maternity-related complications are one of the leading causes of death among women ages 15 to 49”, according to the Population Reference Bureau. Out of every 100,000 Cambodian women delivering a baby, 265 do not make it out of the birth room alive. In comparison, in developed countries like Italy, Australia and Israel, only 4–6 mothers out of 100,000 perish during childbirth.
While there are many different reasons for the high maternal mortality, a prominent one is chronic conditions like anemia caused by a lack of dietary iron. Iron deficiency affects about 60% of pregnant Cambodian women, and results in premature labor and hemorrhages during childbirth.
There is good evidence that iron can leach out of cast-iron cookware, but such tools can be too expensive for the average Cambodian family. In 2008, however, Christopher Charles, a student from the University of Guelph, had a great idea: he and his team distributed iron discs to women in a Cambodian village, asking them to add the discs to the pot when making soup or boiling water for rice. In theory, the iron was supposed to leach from the ingot into the food. In practice, the women took the iron nuggets and immediately used them as doorstops, which did not prove as beneficial to their health.
Charles did not let that failure deter him. He realized he needed to find a way to make the women use the iron ingot, and after a conversation with the village elders a solution was found. He recast the iron in the form of a smiling fish – a good luck charm in Cambodian culture. The newly shaped fish enjoyed newfound success as women in the village began putting it in their dishes, and the anemia rate in the village decreased by 43% within 12 months. Today, Charles and his company are scaling up operations, and during 2014 alone supplied more than 11,000 iron fish to families in Cambodia.
The Lucky Iron Fish in a gift package. Source: Wikipedia, by Dflock
Pace Layer Thinking
For me, the main lesson from the iron fish experiment is that new technology cannot be measured and analyzed without considering the way in which society and the current culture will accept it. While this principle sounds obvious, many entrepreneurs overlook it and find themselves struggling against societal forces beyond their control, instead of adapting their inventions so that they are easily accepted by society.
We have here, in essence, a very clear demonstration of the Pace Layering model developed and published by Stewart Brand back in 1999. Brand distinguishes between six different layers that describe society, each of which develops and changes at a pace of its own. Those layers are, in order from the ones that change most rapidly to the ones that are nearly immovable: Fashion, Commerce, Infrastructure, Governance, Culture and Nature.
The upper layers move forward more rapidly than the lower ones. They are the Uber and Airbnb (Commerce layer) that stand in conflict with the government’s regulations (Governance layer). They are the ear extenders (Fashion layer) that stand in conflict with the unwritten prohibition in Western civilization against significantly altering one’s body (Culture layer). And sometimes they are even revolutionary governmental models used to control the population, as with the communist regimes in the USSR, which conflicted with the very biological nature of the human beings put in control of such countries (Governance layer vs. Nature layer).
As you can see in the following slide (originally from Brand’s lecture at The Interval), the upper layers are not only the faster ones, but they are discontinuous – meaning that they evolve rapidly and jump forward all the time. Unsurprisingly, these layers are where innovations and revolutions occur, and as a result – they get all the attention.
The lower layers are the continuous ones. Consider culture, for example. It is impressively (and frustratingly) difficult to bring changes into a cultural item like religion. It takes decades – and sometimes thousands of years – to make lasting changes in religion. Once such changes occur, however, they can remain present for similar vast periods of time. And some would say that religion and Culture are blindingly fast when compared to the Nature layer, which is almost impossible to change in the lifetime of the individual.
You can easily argue that the Pace Layer model is flawed or missing some parts. Evolutionary psychologists, for example, believe that our psychology is a result of our genetics – and thus would probably put some aspects of Culture, Commerce, Governance and even Fashion at the Nature level. Synthetic biologists would say that today we can play with Nature as we wish, and that as a result the Nature level should be bumped up to a higher layer. It could even be said that companies like Uber (Commerce level) are turning out to have more power than governments (Governance level). Regardless, the model provides us with a good starting point when we try to think about the present and the future.
What does the Pace Layer model have to do with the smiling lucky fish? Everything and nothing. While I don’t know whether Charles knew of the model, a similar solution could have been reached by considering the problem in a Pace Layer style of thinking. Charles’ problem, in essence, revolved around creating a new Fashion. He had a hard time doing that without resorting to a lower layer – the Culture layer – and reshaping his idea in ways that fit the existing culture.
Pace Thinking about the Israel-Palestine Conflict
We can use Pace Layer thinking to consider other problems and challenges of modern times. It’s particularly interesting for me to analyze the ongoing Israel-Palestine conflict from a layer-based point of view.
There is currently a wave of terrorist attacks in Israel, enacted by both Palestinians and Israeli-Arabs from East Jerusalem. I would put this present outbreak at the Fashion level: it’s happening rapidly, it’s contagious (more terrorists are making attempts every day), and it’s drawing all of our attention to it. In short, it’s a crisis which we should ignore when trying to get a better long-term view of the overall problem.
What are the other layers we could work with, in regards to the conflict? There is the Commerce layer, representing the trade happening between Israel and the Palestinian Authority. If we want to lessen the frequency of crises like the current one, we should probably find ways to increase trade between the two parties. We could also consider the Infrastructure and Governance layers, thinking about shared cities, buildings or other infrastructures.
Last but not least – and probably most importantly – we need to consider the Culture layer. There is no denying that some aspects of the conflict revolve around the religions and other cultural habits of each side. When a young Israeli-Arab gets out of bed in the morning, feels repressed and decides to murder a Jewish citizen, we need to ask ourselves why the culture around him hasn’t encouraged him to turn to other means of expressing his anger, like writing a column in the paper or going into politics. So the culture must change – and we need to find ways to bring forth such a change.
Obviously, these preliminary ideas and thoughts are merely starting points for a deeper analysis of the problem, but they serve to highlight the fact that every problem and every conflict can be analyzed in several different layers, none of which should be ignored, and that the best solutions should take into consideration several different layers.
Conclusion
The Pace Layer model of thinking can be a powerful tool in the analysis of every challenge, and could be used in many different cases. We’ll probably use it in the future in other articles on this blog, to analyze different situations and crises and examine the deeper layers that exist under the most fashionable and rapid ones.
In the meantime, I dare you to use the Pace Layer model to consider problems of your own – whether they’re of the national kind or entrepreneurial in nature – and report in the comments section what you’ve found out.
Picture from Wikipedia, uploaded by the user Yerevanci
Today I would like to talk (write?) about the first of several different failures in foresight. This first failure – called the Failure of Nerve – was identified in 1962 by the noted futurist and science fiction titan Sir Arthur C. Clarke. While Clarke mostly pinpointed this failure as a preface to his book about the future, I’ve identified several forces leading to the Failure of Nerve, and I discuss ways to circumvent it, in the hope that the astute reader will avoid similar failures when thinking about the future.
Failure of Nerve
The Failure of Nerve is one of the most frequent failures in talking or writing about the future, at least in my personal experience. When experts or even laypeople express an opinion about the future, you expect them to be knowledgeable enough to be aware of the facts and data of the present. And yet, all too often, this expectation is smashed on the hard rock of mankind’s arrogance. The Failure of Nerve occurs when people are too fearful to look for answers in the data that surrounds them, and instead focus on repeating their preconceived notions – which might have been true in the past, but are no longer relevant in the present.
Examples of Failures of Nerve are sadly abundant. Many quote Simon Newcomb, the famous American astronomer, who declared that flying machines were essentially impossible, a mere two years before the first flight of the Wright brothers –
“The demonstration that no possible combination of known substances, known forms of machinery and known forms of force, can be united in a practical machine by which man shall fly long distances through the air, seems to the writer as complete as it is possible for the demonstration of any physical fact to be.”
However, this is not a Failure of Nerve, since in Newcomb’s time, the data from the scientific labs themselves was incorrect. As the Wright brothers wrote about their experiments –
“Having set out with absolute faith in the existing scientific data, we were driven to doubt one thing after another, till finally, after two years of experiment, we cast it all aside, and decided to rely entirely upon our own investigations.”
Newcomb’s Failure of Nerve appeared later on, when he was confronted with reports of the Wright brothers’ success. Instead of withholding judgement and checking the data again, Newcomb only conceded that flying machines might have a slight chance of existing, but that they certainly could not carry any human beings other than the pilot.
The first flight of the Wright brothers – against the better judgement of the scientific experts of the time. Source: Wikipedia
A similar Failure of Nerve can be found in the words of Napoleon Bonaparte from the year 1800, uttered in reply to news regarding Robert Fulton’s steamboat –
“What, sir, would you make a ship sail against the wind and currents by lighting a bonfire under her deck? I pray you, excuse me, I have not the time to listen to such nonsense.”
Had the rising emperor bothered to take a better look at the current state of steamboats, he would have learned that boats with “bonfires under their decks” were already carrying passengers in the United States, even though the venture was not a commercial success. Fulton went on to construct a steamboat (nicknamed “Fulton’s Folly”) that rose to fame, and in 1816 France finally recovered its senses and purchased a steamboat from Great Britain. Knowing of Napoleon’s genius in warfare, it is an interesting thought exercise to consider how history might have changed had the emperor realized the potential of steamboats while the technology was still emergent.
Is it possible that steamboats like this one would’ve changed the course of history, had Napoleon not been affected by the Failure of Nerve? Source: Wikipedia
How do we deal with a Failure of Nerve? To find the answer to that question, we need to understand the forces that make this failure so common.
Behind the Curtains of the Nerve
There are at least three different forces that can contribute to a Failure of Nerve. These are: selective exposure to information, confirmation bias, and last but definitely not least – the conservation of reputation.
The Force of Selective Exposure
Selective exposure to information is something we all suffer from. In this day and age, we have an abundance of information. In the past, news would have taken weeks or months to reach us, and we only had the village elder’s opinion to interpret it for us. Today we’re flooded with information from multiple media sources, each with its own not-so-secret agenda. We’re also exposed to columns by social critics and other luminaries, and we can usually tell in advance how they look at things. If you read Tom Friedman’s column, you can be sure he’ll give you the leftist approach. If you turn on the TV to The Glenn Beck Program, on the other hand, you’ll get the right-wing view.
An abundance of information is all well and good, until you realize that human beings today suffer from a scarcity of attention. They can only focus on one article at a time, and as a result they must choose how to divide their time between competing pieces of information. The easiest choice? Obviously, to go with the news that supports your current view of life. And that is indeed the choice many people make – which understandably results in a Failure of Nerve. How can you be aware of any new information that contradicts your core beliefs, if you only listen to the people who repeat those same core beliefs?
Philip E. Tetlock, in his new book Superforecasting, tells of Doug Lorch, one of the top forecasters discovered in recent years, who found a way to circumvent selective exposure, albeit with some effort. In the words of Tetlock (p. 126) –
“Doug knows that when people read for pleasure they naturally gravitate to the like-minded. So he created a database containing hundreds of information sources – from the New York Times to obscure blogs – that are tagged by their ideological orientation, subject matter, and geographical origin, then wrote a program that selects what he should read next using criteria that emphasize diversity. … Doug is not merely open-minded. He is actively open-minded.”
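Tetlock doesn’t describe Lorch’s program in detail, but the core idea – tag each source, then let a diversity criterion pick the next read – can be sketched in a few lines. The source names and tags below are invented for illustration; a real database would hold hundreds of sources and richer tags:

```python
import random
from collections import Counter

# Invented sources, each tagged by ideological orientation and region.
sources = [
    {"name": "Source A", "orientation": "left",   "region": "US"},
    {"name": "Source B", "orientation": "right",  "region": "US"},
    {"name": "Source C", "orientation": "center", "region": "EU"},
    {"name": "Source D", "orientation": "left",   "region": "EU"},
]

def pick_next(history: Counter) -> dict:
    """Pick a source whose (orientation, region) tag has been read least,
    breaking ties at random so no single outlet dominates."""
    least = min(history[(s["orientation"], s["region"])] for s in sources)
    candidates = [s for s in sources
                  if history[(s["orientation"], s["region"])] == least]
    return random.choice(candidates)

# Simulate eight reading sessions: the least-read tag always comes next,
# so exposure stays evenly spread across the ideological spectrum.
history = Counter()
for _ in range(8):
    chosen = pick_next(history)
    history[(chosen["orientation"], chosen["region"])] += 1
```

The mechanism is trivial; the discipline of actually reading what it serves up is the hard part, which is exactly Tetlock’s point about active open-mindedness.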
Of course, reading views opposite to the ones you adhere to can be annoying and vexing, to say the least. And yet, there is no other way to form a more nuanced and solid view of the future.
Superforecasting: The Art and Science of Prediction. By Philip E. Tetlock and Dan Gardner
The Force of Confirmation Bias
Sadly, even when a person chooses to actively open his or her mind to different views, it does not mean that they will be able to assimilate the lessons into their outlook. As human beings, we are wired to –
“…search for, interpret, prefer, and recall information in a way that confirms one’s beliefs or hypotheses while giving disproportionately less attention to information that contradicts it.” – Wikipedia
The confirmation bias is well known to any expectant parent. You walk around the city, and you find that the streets are chock-full of parents with strollers and babies. They are everywhere. You can’t avoid them in the streets or on the bus, and even at work you find that your co-worker has decided to bring her children to the workplace today. So what happened? Has the world’s birth rate suddenly doubled?
The obvious answer is that we are constantly influenced by confirmation bias. If our mind is constantly thinking about babies, then we’ll pay more attention to any drooling toddler crossing the road, and the memory will be etched much more firmly into our minds.
The confirmation bias does not influence only young parents. It has real importance in the way we view our world. A study from 2009 demonstrated that people spend 36 percent more time, on average, reading articles that they agree with. Another study from 2009 demonstrated that when conservatives watch The Colbert Report – in which Stephen Colbert satirizes the part of a right-wing news reporter – they read extra meaning into his words. They claimed that Colbert only pretends to be joking, and actually means what he says on the show.
How does confirmation bias relate to the Failure of Nerve? In a way, it serves to negate some of the bad reputation that the Failure of Nerve has garnered from Clarke. The confirmation bias basically means that unless we make a truly tremendous and conscious attempt to analyze the world around us, our mind will fool us. We’ll pay less attention to evidence that refutes our current outlook, and consider it of lesser importance than other pieces of evidence. Or as the pioneer of the scientific method, Francis Bacon, put it (I found this great quote in a highly recommended blog: You Are Not So Smart) –
“The human understanding when it has once adopted an opinion (either as being the received opinion or as being agreeable to itself) draws all things else to support and agree with it.”
Can we fight off the influence of the confirmation bias on our thinking process? We can do so partially, but never completely, and it will never be easy. Warren Buffett (third on Forbes’ list of the richest people in the world, and one of the most successful investors alive) uses two means to tackle the confirmation bias: he specifically looks for dissenters and invites them to speak up, and (assumedly) he promptly writes down any piece of evidence that contradicts his current ideas. In the words of Buffett himself (quoted in TheDataPoint) –
“Charles Darwin used to say that whenever he ran into something that contradicted a conclusion he cherished, he was obliged to write the new finding down within 30 minutes. Otherwise his mind would work to reject the discordant information, much as the body rejects transplants.”
In short, in order to minimize the impact of confirmation bias, you need to remain constantly vigilant against the tendency to be certain of yourself. You must actively chase after those who disagree with you and seek their opinions, and perhaps most importantly: you should write it all down, in order to distance yourself from your original perspective and allow yourself to judge your thinking as though it were someone else’s.
The Conservation of Reputation
One of the best-known laws of the physical world is the Conservation of Mass. Only slightly less well known is the law of Conservation of Reputation, which states that the average expert always takes the best of care not to lose face or reputation in his or her dealings with the media. Upton Sinclair summed up this law nicely when he wrote –
“It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
Sadly enough, most experts believe that revisions of past forecasts, or indeed any change of opinion at all, will diminish and tarnish their reputation. And so, we can meet experts who will deny reality even when they meet it face-to-face. Some of them are probably blinded by their own big ideas and egos. Others probably choose to conserve what’s left of their reputation and dignity at any cost, even as they see their forecasts shrivel and wither in the light of the present.
The story of Larry Kudlow is particularly prominent in this regard. Kudlow forecast that President George W. Bush’s substantial tax cuts would result in an economic boom. The forecast fell flat, and the economy did not progress as well as it had during President Clinton’s term. Kudlow did not seem to notice, and declared that the “Bush Boom” was already here. In fact, in 2008 he proclaimed that the current progress of the American economy “may be the greatest story never told”. Five months later, Lehman Brothers filed for bankruptcy, and the entire global financial system collapsed along with that of the U.S.
I am going to assume that Kudlow was truly sincere in his proclamations, but obviously many other experts will not feel the need to be as honest, and will adhere to their past proclamations and declarations come hell or high water. And if we’re totally honest, it must be said that the public encourages such behavior. In January 2009, The Kudlow Report (starring none other than Kudlow himself) began airing on CNBC. Indeed, sticking to your guns even in the face of reality seems to be one of the most important lessons for experts who wish to come out on top in the present – and who assume, correctly, that few if any would force them to come to terms with their forecasts from the past.
Conclusion
In this text, the first of several, I’ve covered the Failure of Nerve in foresight and forecasting. The Failure of Nerve was originally identified by Arthur C. Clarke, but I’ve tried to make use of our current understanding of behavioral psychology to add more depth and to identify ways for people to overcome this all-too-common failure. Another book that has been very helpful in this endeavor is the recently published Superforecasting by Philip E. Tetlock and Dan Gardner, which you should definitely read if you’re interested in the art and science of forecasting.
There are obviously several other failures in foresight, which I will cover in future articles on the subject.
Yesterday I suggested a scenario about the Skarp laser razor campaign, in which the new device disrupts the current shaving industry giants. Well, that was yesterday. Less than 24 hours after I published the piece in this blog, Kickstarter suspended (a polite word for “dumped”) the project. The people behind Skarp jumped ship immediately to Indiegogo, and seem to be doing quite well in there – gathering approximately $10,000 every hour, for the past ten hours.
There have been several accusations by self-proclaimed and professional experts in the field of lasers and physics regarding the feasibility of the laser razor. And yet, the suspension by Kickstarter was formally for a very different reason: it turns out the Skarp team did not have a working prototype. Or maybe they did, but it worked so haphazardly that it could not be used for actual shaving.
So what’s going on here? Don’t the folks at Kickstarter consult experts before they agree to take up projects that may be physically impossible?
I believe they do not, and that’s generally a good thing.
In order to understand why I say so, let’s first try to see what purpose Kickstarter and crowdfunding platforms as a whole serve in society.
The Three Steps of Innovation
We often hear of the entrepreneur who had an amazing idea. A truly breathtaking invention formed in his mind, and he immediately proceeded to make it a reality, earning himself a few billion dollars and a vacation in the Bahamas on the way.
That, at least, is the myth.
In reality, innovation is based on three distinct steps:
Recombination of existing concepts into many new ideas;
Finding out which ideas are good, and which aren’t;
Rapidly iterating a good idea until it becomes an excellent one.
The Polymerase Chain Reaction (PCR) is an example of a unique recombination of existing concepts that changed the world. The PCR device is used in nearly every biological lab as part of the work needed to sequence DNA, to create new DNA strands, and to genetically engineer bacteria, plants and even human cells. The technique was invented by Kary Mullis, who won the 1993 Nobel Prize in Chemistry for it, ‘simply’ by recombining existing techniques and automating them to a degree.
Many other winning inventions are in fact a recombination of existing ideas. Facebook, for example, relies on the recombination of a social network, the World Wide Web, smartphones, image and video storing, hashtags, and many others. Similarly, autonomous (driverless) cars are a recombination of computers, sensors, image processing, GPS, etc.
Since we’re constantly innovating, dozens (and sometimes hundreds and thousands) of new ideas are being added to the mix every year, and entrepreneurs are trying to recombine them in different and exciting ways to create new inventions. This is the first step of innovation: the frantic recombination of existing ideas by inventors from around the world.
The only problem is, most of these new inventions are, well, rubbish.
In his book “How to Fly a Horse”, Kevin Ashton (the inventor who gave the Internet of Things its name) details what happens to newly patented inventions in at least one firm – Davison Design. For the past 23 years, Davison has mainly taken money from customers to register their patents. Overall, its revenues equal $45 million a year, with an average of 11,000 people signing with the company. How many of them actually made any money from their patents and inventions? Altogether, only 27 people have seen any money out of their patents. The statistics, in short, are grim for any inventor. You may think the market is eager to use your new idea, but you can never tell for certain until the product is actually on the market. In fact, Shikhar Ghosh from Harvard Business School has discovered that “about three-quarters of venture-backed firms in the U.S. don’t return investors’ capital”. So nobody knows which idea is going to be any good: not even the big venture capitalists who invest millions of dollars in those ideas.
This is where the second part of innovation comes in: we have to winnow the good ideas from the bad ones. In the past, this function was only performed by government grants and investors. Distinguished committees would go over hundreds and thousands of idea submissions, and select the ones that seemed to have the best chance for success. Unfortunately, such committees are hard-pressed to support all the applicants, and as a result, 98-99% of ideas are refused funding.
Consider, on the other hand, Kickstarter and other crowdfunding platforms. In Kickstarter alone, 43% of campaigns reach their goals and obtain the money they needed to make their vision a reality. In a way, crowdfunding allows inventors to test their ideas: does the public want this new invention? Is it any good? Are people willing to pay for it… even before the factories have received the million dollar contract to manufacture all the parts?
In that way, crowdfunding platforms enable innovation by streamlining the second step: distinguishing the good ideas from the bad ones. And once a good idea has been found and supported – whether it’s an ice chest with a USB charger, or a pillow that covers the user’s head completely – the inventor keeps upgrading and changing the product so that it becomes better with each iteration. This is the reason that iPhone 6S is so much better than the original iPhone.
Innovation is the steppingstone on which our modern day society is built. Innovation leads to increased productivity, and as Paul Krugman says – “Productivity isn’t everything, but in the long run it is almost everything.” Innovative new companies are responsible for the majority of new jobs in the United States, and innovative ‘crazy’ ideas – the kind only few dared to support when they were originally proposed, like Airbnb or Google – have led to wholesale changes in the way society behaves.
Today’s new Google or Airbnb would not have to look for elite investors: they could go to the crowdfunding platforms to ask for assistance, and their chances of receiving funding would be much higher, at least in principle.
That is why Kickstarter is so important for innovation and for modern society: it allows the public to support many more innovators than ever before. And while quite a few of them are going to fail (probably most of them), the ones who make the big breakthroughs are going to change society. At the very least, even the failed campaigns show the rest of us the value of some ideas. Overall, crowdfunding platforms move society forward.
The Bad Apples
“That is all just swell,” you might say now, “but how can we be sure that the projects on Kickstarter are not a scam? How can we know for sure that the Skarp laser razor isn’t a scam? The experts were all against it!”
Well, here’s a newsflash: when it comes to innovation, you can’t always rely on the experts.
There are plenty of examples that support this statement. Both Lord Kelvin (the noted British physicist) and the great astronomer Simon Newcomb dismissed any attempt to build a heavier-than-air flying machine, a mere two years before the Wright brothers demonstrated the first successful airplane. The British Astronomer Royal Richard van der Riet Woolley declared confidently that “Space travel is utter bilge” – one year before Sputnik orbited the Earth. In fact, experts are wrong so often about the limits of possibility that Arthur C. Clarke issued his First Law about them –
Arthur C. Clarke’s First Law. Originally from IZQuotes
In short, experts can be wrong, too, even in matters as rigid as the laws of nature and the ways we can manipulate them. And it is so much easier to get social developments and innovations wrong, since there is no perfect model of the human mind or of society. Thus, no expert could have forecast with certainty that people would upload their photos so that millions could see them (Facebook, Flickr, Instagram), or share their houses (Airbnb) and cars (Uber) with total strangers. And yet, these innovative start-ups came into existence, and changed the world.
That does not mean, of course, that the public should support every wild promise on Kickstarter. In fact, I think Kickstarter did a good thing when they removed the Skarp project because the inventors had no fully working prototype. In the end, crowdfunding platforms need to balance the desire to protect their users from scams against the fact that it’s very difficult to distinguish between scams and some extremely innovative ideas. At least in this case, it seems Kickstarter decided to err on the side of caution.
Conclusion
While many are asking whether the Skarp laser razor is a scam, that’s the wrong question. The real question is what purpose Kickstarter and other crowdfunding platforms should serve in our modern society. The honest answer is probably that the users of these platforms have a better chance of seeing their money vanish into thin air – but altogether that’s a good thing, since more innovators get supported overall, and the few who succeed change the world.
So go ahead: support Skarp on Indiegogo, or any other crazy idea on Kickstarter, Tilt and the other crowdfunding platforms out there. Buy that new (barely functioning) 3D-printer, the shiniest (and fragile) aerial drone, or that dream-reader that doesn’t really work. Go ahead – now you have the justification for it: you’re promoting innovation in society. Or in other words – bring on the scams!
Shaving is one of the great hardships of my life (and I guess I should consider myself lucky that this is one of my top worries). Up until recent years there have only been two giants in the shaving market: Schick and Gillette. Both are engineering their razor blades with space-age technology, promising you a blade that looks and feels as if it were found floating in space, shining magnificently in the Sun’s bright rays.
And it stings. Oh, how it stings my skin.
Both companies are obviously trying to minimize cuts to their customers’ skin, but getting the nicking frequency down to zero is a daunting task, and probably an impossible one. We’re dealing with blades here, after all, sharpened to the point where they could (allegedly) cut air molecules in twain. As the book of Proverbs admonishes us: “Can a man carry fire in his lap, without burning his clothes?”
I would think that the burning clothes would be of the least concern to the guy carrying fire in his lap (please don’t do that), but the point is clear. You play with fire, you get burned. You play with razors, you get cut.
Well, then, why don’t we change the paradigm of using a razor blade for shaving? That’s exactly the idea behind the Skarp Razor project, which has recently surged to new heights on everybody’s favorite crowdfunding platform: Kickstarter.
The basic idea is pretty simple. Instead of blades, the Skarp ‘razor’ utilizes a small laser beam with a wavelength selected specifically to cut human hair. It does not cut or burn the skin, needs no shaving foam, and requires only one AAA battery a month. Those, at least, are the promises on the campaign site.
The Skarp Laser Razor – a virtual demonstration, from the Kickstarter campaign site.
The inventor behind the new razor, Morgan Gustavsson, has worked in the medical & cosmetic laser industry for three decades, and invented and patented the most common method of laser hair removal used in cosmetic beauty salons. Now he has perfected and miniaturized the technology (again, according to the campaign’s claims, which should be taken with a grain of salt) to bring it to everyone’s households.
If the Skarp Razor actually delivers on its promises, the consequences would be enormous, essentially disrupting the stagnant shaving industry. Schick and Gillette have both competed under a very limited paradigm: shaving is to be done with blades only. Their entire business model revolves around the sale of high-priced blades. How can they handle a competitor that sells a single razor that should last for nearly a lifetime of shaving?
Short answer: they can’t, at least not under their current business model. Unless they find a new breakthrough technology of their own, their business model will be disrupted within a year, and they may well find themselves on the ropes in five years or less. This may be yet another Kodak Moment: a huge industry giant gets disrupted by an innovation that reaches the masses (digital cameras in smartphones), and declares bankruptcy five years later.
The possible disruption of this $4.13 billion market reveals an important principle of today’s industry, which has been mentioned before by Peter Diamandis, founder and chairman of the X-Prize foundation and co-founder of Singularity University: “If you don’t disrupt yourself, somebody else will.”
This principle is particularly relevant in the case of Schick and Gillette. The two giants have faced no real competition except each other for a long time now, and were thus unwilling to change their basic operating paradigms. They innovated, decorated and re-innovated their blades, but they did not find new ideas and concepts with which to rethink the process of shaving. Now that the laser razor has made an appearance, they will need to look frantically for answers to the threat.
Of course, nobody can forecast the future accurately, and the new laser shaving technology defies any attempt at foresight right now because we don’t know how it works exactly. Furthermore, the initial product that will be delivered to consumers next year is bound to be in a preliminary state: primitive and rough, and almost certainly disappointing for the wide public. The Skarp 2.0 will be infinitely better and more suitable for the needs and wishes of the consumers – but only if the company survives the first disappointment.
Conclusion
We can’t know yet whether the Skarp Razor is about to disrupt the shaving industry, especially since at the moment it’s no more than a promise on a crowdfunding site. However, if the invention does have merit and proves itself over the next year, the shaving industry giants will find themselves in a race against a new technology that they were not prepared for. I, for one, welcome such competition that will lower the prices of blades, and force the old guard to re-innovate and rethink their existing products and business models. I don’t envy the people at Gillette and Schick, though, for whom the next decade is going to be a hair-raising rollercoaster.
One of the complaints I hear most often from concerned parents is that their kids spend most of their time in the virtual world, their eyes constantly glued to their smartphones’ screens.
“How can those kids live like that?” they demand to know. “Are we raising a new generation of zombies, totally dependent on their screens?”
My answer, always, is to remind them just how recently smartphones appeared on the world stage. Until 2007, there were no smartphones for the public, which means this innovation is basically eight years old – a ridiculously short period of time compared to the history of humanity, or even to disruptive innovations like trains and cars. We’re still figuring out how to use smartphones, well, smartly, and how to engineer our gates into the virtual world. And I tell those concerned parents that in ten years’ time, their children won’t look into their smartphones to find the virtual world; instead, the virtual world will come to them, unbidden.
To understand what I’m talking about, you just need to take a look at one of the hottest scenes in technology today: the virtual and augmented realities (VR and AR). Devices like Oculus Rift, Vive and Samsung Gear VR are coming to the consumer market in this year and the next, and the experience they provide is like nothing we’ve seen before. Trust me on this one: I’ve tried both the Rift and the Gear VR, and found myself swimming in the ocean with whales, visiting Venice, and running from real-life monsters in a temple… without actually getting up from my chair.
A trailer sample of the new generation of VR headsets: the HTC Vive, created by HTC and Valve
The forecasts for virtual reality are incredibly optimistic, with Business Insider estimating that shipments of VR headsets will double in number every year, creating a $2.8 billion hardware market by 2020. The Kzero consulting firm has forecast that annual revenues for VR software will reach $4.6 billion by 2018. This growth rate leaves even the iPhone’s far behind, and means that – if those forecasts are anywhere near accurate – VR is about to take the world by storm in the next three years.
A forecast by Business Insider for the near future of VR devices. Notice the 99% cumulative annual growth rate – which essentially means a doubling of the number of shipments every year.
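To get a feel for what a 99% cumulative annual growth rate actually means, here’s a quick back-of-the-envelope calculation. The starting shipment figure below is purely illustrative (not Business Insider’s actual number); the point is only to show how fast a yearly doubling compounds:

```python
# Project shipments under a 99% compound annual growth rate (CAGR).
# The initial figure is a made-up placeholder, just for illustration.
initial_shipments = 1.0  # millions of headsets in year 0 (hypothetical)
cagr = 0.99              # 99% annual growth ~ a doubling every year

for year in range(1, 6):
    shipments = initial_shipments * (1 + cagr) ** year
    print(f"Year {year}: {shipments:.1f} million headsets")
```

After just five years of such growth, the hypothetical starting number has multiplied more than thirty-fold, which is exactly why forecasts like these sound so dramatic.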
For myself, I’m still hesitant to believe that the VR market can rise so rapidly to prominence. The VR devices, while creating beautiful sceneries for the users to explore, are still cumbersome to wear on the face, and leave you disconnected from your immediate surroundings. So I prefer to stick to the old adage (allegedly by Arthur C. Clarke, and later proven by research in foresight) – “Experts are too optimistic in the near future, and too pessimistic in the long-run.”
These limitations will change in the future, and will most probably lead to the creation of augmented reality (AR) devices, which will look more like a normal pair of glasses, but with the pictures being displayed on the glasses themselves. In that way, the user will be able to see the physical world, along with the virtual world being overlaid on it.
Such AR glasses as described are already in existence, though they are still quite limited in capabilities. The Lumus glasses do just that, as do the Meta glasses. While both are still clunky, cumbersome, and have a limited field of view, they’re the early birds in the AR-Glasses field. If we assume that technology will keep on progressing (and honestly, I can’t see a way for it to stop!), we can be sure that the next AR-Glasses will be thinner, more energy-efficient, and more usable in general.
Let’s talk a bit about the games that AR and VR could open up for us in the future.
Gaming and VR / AR
Using VR for gaming is a no-brainer. In fact, that’s the main use analysts foresee for VR in the next five years. Imagine running through the virtual landscape of Azeroth in World of Warcraft, or climbing the virtual towers and cathedrals of Paris in Assassin’s Creed. Those are experiences that will make hardcore gamers flock to VR.
However, I would like to consider a different sort of gaming – one that might be accomplished by means of AR. The gamer of the not-so-far-away-future may actually be the athletic sort, because many games would be played on the streets of the city. By using AR-Glasses, every player would see a different image of the street: some will see the street as a dungeon with a dragon at its end, while others will find themselves forced to evade virtual deadly robots on the prowl, and still others would chase virtual butterflies on the pavement. Admittedly, that’s one crowded street!
OK, this idea sounds a bit silly when you consider all the human congestion and potential traffic accidents that could occur, but there is definitely a case for streets and physical infrastructure serving as playing grounds for hardcore gamers. Even ‘soft gamers’ like most of us could find ourselves taking a walk or a jog through seemingly ordinary streets, with the AR-Glasses over our eyes turning the jog into a run from a dragon (with extra points if you make it out safely!) or adding some interesting activity while walking, like finding and picking up virtual playing cards on the pavement.
There are tantalizing hints in the present of this sort of outdoor gaming. The “Zombies, Run!” smartphone game is all about being chased by zombies in the real world. The zombies, of course, are virtual: you can only hear them behind you as you run, while the narrator gives you missions, and the more you run, the more supplies you collect automatically to build up your base. Another app, by the Mobile Art Lab in Japan, lets you see butterflies through your iPhone’s camera and swipe at them to catch them – turning them into discount coupons for restaurants.
Perhaps the most impressive example (although it’s more of a publicity stunt than anything else) of what augmented reality could do for the gaming world has been shown recently by Magic Leap – an AR company, obviously. Take a look!
Obviously, these are only hints of the future, but they point at an amazingly colorful and fascinating future for us all. The virtual world will no longer be far away from us, or force us to take our smartphones out of our pockets. Rather, it will be all around us, and we’ll see and hear it via the AR-Glasses and earbuds that we carry all the time.
The Challenges
Why isn’t this future here already? The challenges can be divided into two sorts: technological and societal.
The technological challenges consist mainly of battery limits, which have been the bane of smartphones and other wearable computing devices so far. In the case of highly sophisticated equipment such as AR-Glasses, the size of the projectors that send pictures to your eyes or onto the glasses is also a problem, and makes for extremely unfashionable glasses. Interestingly, computing power does not seem to be a real challenge on its own, since AR-Glasses and other wearables could use the smartphone in one’s pocket to do most of the toughest computing tasks for them… which brings us back to the need for more efficient and longer-lasting batteries in the smartphone as well.
None of these technological challenges represents an impassable barrier. In fact, if there’s one thing we can promise, it’s that future devices will have more efficient batteries, and will have the potential to be smaller. The trends indicate clearly that batteries are rapidly making progress towards better energy density.
The other big challenge is the societal one, and this is where Google Glass crashed into a wall. People simply did not like the fact that the person they’re speaking with could take a picture or a video of them at any time, or may even watch porn during a face-to-face conversation. The design of the Google Glass itself did not do anything to ameliorate those anxieties, and thus people just stopped using the Glasses to avoid becoming social pariahs.
While many believe the Google Glass has completely failed, we must remember that every device begins as a partial failure, since nobody knows how it will be used or how people will react to it. Google Glass was an experiment in design, and Google is now working relentlessly towards Google Glass 2.0, which will fit better with people’s desires and uses.
In short, while there are still challenges to the AR scene, they will be solved sooner or later. Any other conclusion forces us to think that somehow technology will cease to evolve and that companies will stop adapting their products to the consumer market, and I don’t see that happening anytime soon.
Conclusion
There are plenty of uses for virtual and augmented realities other than gaming, and in future posts we’ll deal with them as well. For now, I hope I’ve convinced you that at least part of the gaming activity would not take place solely in front of a screen, but in the streets and the parks. It’s going to be a pretty interesting world to live in, full of colors and messages and experiences that will blend seamlessly with the physical world.
While visiting the Roger Williams Park Zoo in Rhode Island, I happened to take this photo of genetically modified pumpkins displaying a wide range of advertising materials, apparently for the corporate sponsors of zoo activities.
A Genetically Engineered Pumpkin Advertisement, from the Roger Williams Park Zoo in Rhode Island. (Well, they’re more of the ordinary painted/carved pumpkins, but it sounds way cooler when you think they might be engineered to produce these writings)
Well, obviously the pumpkins aren’t actually genetically modified – they were just painted or sculpted by human artists – but at the rate genetic engineering is progressing, it’s quite possible that in a few decades we will have genetically modified fruits and vegetables that actually display readable advertisements as they grow.
Now wouldn’t that be interesting?
I decided to take this chance to consider innovative ways in which future GMOs (Genetically Modified Organisms) could be used to promote and advertise products, ideas and corporations. To do that, I utilized a fascinating method for systematic innovation, around which an entire consulting company called SIT (Systematic Inventive Thinking) was founded.
The principles of the SIT system have been described in a 2003 article in Harvard Business Review. In short, the main idea is to limit your creativity instead of trying to stretch it sky-high. Why is that so important? Consider that you’re on a first date, and the girl (or boy) is leaning forward across the table and is asking you that ages-old question: “Tell me about yourself!”
If you’re like most human beings, you probably freeze in complete bewilderment, unsure where to begin or to end, and what you should actually talk about. You’re lost in the chaos of your own mind, sinking below the waves of many thoughts and impulses: should you tell her about your trip to India? Or maybe about your ambitions for the future? Or maybe she really wants to hear about your bar-mitzvah?
Coming up with creative and innovative ideas is similar to dating, at least in this view. Many executives tell their staff to find and implement creative ideas in their product, leaving them floundering and resentful. Many (too many) creativity workshops look that way too: with round tables of employees and executives who are told to be creative and just to “think up a new innovative product for the company!”
Such exercises rarely lead to good results. At best, the participants fall back on whatever ideas they’ve read or thought about before, and almost no new or innovative notions are being produced at those meetings.
Now consider the alternative dating scene: your date asks a very simple question – “What did you eat this morning?” In this case, the answer is clear. You have a starting point that is safe and sound, and while admittedly it is not very interesting, the conversation and the jokes can start flowing from that point onwards. It works the same way with creativity: by putting constraints on your thinking process in a systematic fashion, you’re actually able to analyze the situation in an orderly way, and to develop each innovative idea fully, one at a time.
The SIT method places constraints over the innovation process by forcing the thinkers to consider innovative changes to the current product in only five different directions: subtraction, multiplication, division, task unification, and attribute dependency. Let’s go over each one to think up innovative ways GE plants could be used for advertisement.
SIT Thinking Tools
Subtraction
Subtraction means that instead of our natural tendency to add features to an existing product, we remove existing features, particularly the kind that seem vital and necessary.
How does this thinking tool relate to GMOs? Well, what would happen if we were to engineer a fruit without its skin or outer covering? The skin obviously serves to protect the soft and squishy interior, so it’s definitely an important part of the product. However, maybe we could make the skin thinner and translucent, so that consumers would see exactly what they’re getting: whether the banana has dark stains on its edible part, or whether the tomato is rotten or has worms. That would certainly be an interesting advertising maneuver: “We don’t have anything to hide!”
Multiplication
By applying the multiplication thinking tool, we multiply – add more copies of – certain existing components of the product, but then alter them in a significant way. Gillette’s double-bladed razor is a well-known example: they added an extra blade, and then found a different use for it on the other side of the razor.
How about, then, engineering the fruit to contain more seeds – but ones that are actually viable, and grow into some interesting and different kinds of fruit? The fruit’s manufacturer could bring it to market as a tool for teaching children about the natural world, and even create a competition to find the “one golden seed” hiding in one fruit out of every hundred, from which a truly extraordinary fruit will grow.
Division
The division tool makes us divide the product into its separate components – and then recombine them in some new way. In the case of genetically modified fruit, we can roughly separate the ‘product’ into seeds, edible flesh, skin and stem. How can we recombine these four to make the final product more valuable for advertisers? Here’s an idea: make the seeds grow on the surface of the fruit, but make them as small as speckles, adding a shine to the fruit. Or maybe make the stem go through the entire fruit, like a skewer, and promote the fruit as one that can be easily roasted over a grill.
Task Unification
Which two tasks can be unified into a single component of the fruit? This one is easy: make the stem tasty, so that it can be eaten as a snack alongside the fleshy fruit. One can also imagine fruits that contain therapeutic compounds, so that eating them serves a double purpose: nourishment and treatment.
Attribute Dependency Change
The components and attributes of every product depend, in part, on its environment. Shoes for girls, for example, often come in pink (attribute: color). Watermelons are often sold in summertime, which is another relation between an attribute (time of sale) and the product.
Using this thinking tool, we can really go wild. If we focus only on color as an attribute, we can engineer fruit that visibly changes color when infected by certain bacteria, or whose color indicates when the fruit was picked from the field, assuring consumers that they’re getting fresh produce. And this is just the beginning, since we can also play with the smell, texture, and even weight and size of the fruit. So many opportunities here!
Conclusion
You may or may not like the ideas I gave for genetic engineering of plants. Regardless, this post was primarily an exercise in innovative thinking meant to provide a sneak peek at a wonderful methodology for innovation. You are warmly invited to suggest more ideas for genetic engineering of plants in the comments section, using the SIT methodology as a guide. And of course, you can use the principles of the SIT Methodology to innovate your own ideas for a product, service or company.
I’m sure you’ll make good use of the methodology, and will discover that innovating under constraints is as useful as it is fun.
Today, the Nobel Prize winners in the field of medicine were announced. All three winners are esteemed scientists who have discovered “therapies that have revolutionized the treatment of some of the most devastating parasitic diseases”, according to the Nobel committee. This is doubtlessly true: two of the winners’ discoveries have led to the development of a drug that has nearly brought an end to river blindness; the third scientist developed a drug that has reduced mortality from malaria by 30 percent in children, and saves over 100,000 lives each year.
I could go on about the myriad ways in which medicine is improving the human condition worldwide, or about how we’re eradicating diseases that have afflicted the human race since time immemorial. I won’t do that. The progress of medicine is self-evident, and in any case is a matter for a longer blog post. Instead, let us focus on a different venture: the attempt to forecast the Nobel Prize winners.
The Citation Laureates
Every year since 2002, the Thomson Reuters media and information corporation has taken a shot at forecasting the Nobel laureates. To that end, they analyze the most highly cited research papers in every field, and the authors behind them. A scientist’s prestige largely comes from a high citation rate – i.e. the number of times others have referred to your work when conducting their own research. It’s therefore clear why this single, easily quantified parameter could serve as a good basis for forecasting the annual Nobel winners.
So far, it looks like Thomson Reuters have done quite well with their forecasts. In every year except 2004, they have successfully identified at least one Nobel Prize winner in all the scientific fields: Physiology or Medicine, Physics, Chemistry and Economics. Overall, Thomson Reuters has “correctly forecast 21 of 52 science Nobel Prizes awarded over the last 13 years”.
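As a rough sketch of the underlying idea (this is not Thomson Reuters’ actual methodology, and the paper data below is entirely invented), ranking researchers by total citations can be as simple as:

```python
from collections import Counter

# Toy citation data: each entry is (author, citations of one paper).
# All names and figures here are hypothetical, for illustration only.
citations = [
    ("Alice", 1200), ("Bob", 950), ("Alice", 800),
    ("Carol", 400), ("Bob", 300), ("Dave", 150),
]

# Aggregate citations per author, then rank from most- to least-cited.
totals = Counter()
for author, count in citations:
    totals[author] += count

ranking = totals.most_common()
print(ranking)  # Alice leads with 2000 total citations
```

The real analysis obviously adds many refinements (field normalization, co-authorship, timing of the work), but the core signal is this simple count.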
It is fascinating for me that by working with tools for the analysis of big data, one could reach such a high rate of success in forecasting the decisions made by the Nobel committees. But here’s the deeper issue, in my opinion: Thomson Reuters clearly intends only to forecast the Nobel winners – but is it possible that their selection is more accurate than that of the Nobel committee?
The Limits of Committees
How is the Nobel Prize decided? Every year, thousands of distinguished professors from around the world are asked to nominate colleagues who deserve the prize. Each committee for the scientific prizes ends up with 250-350 nominees, whom they screen and analyze in order to come up with just a few recommendations to present to the 615 members of the Royal Swedish Academy of Sciences – who then vote for the final winners.
Note that the rate-limiting step in the process lies in the hands of the committee members. The number of members varies between committees, but generally ranges from 6 to 8. And as anyone who has ever taken part in any committee discussion knows, there are usually only two or three people who really influence and shape the debate. In other words, if you want a real chance at winning the Nobel Prize in your field, you had best develop your connections with the most influential members of the appropriate committee.
Please note that I’m not accusing the Nobel committees of fraud or nepotism. However, we know that even the best and most reliable experts in the world are subject to human biases – sometimes without even realizing it. The human mind, after all, is a strangely convoluted place, with most of the decision-making process handled subconsciously. Individual decision makers are therefore biased by nature, as are small committees. The Nobel laureate selection process, therefore, is biased – which I guess we all know anyway – and even worse, it remains under wraps: the actual discussions are not shared with the public for criticism.
Examples of (alleged) bias can be found easily (heck, there’s an entire Wikipedia page dedicated to the subject). Henry Eyring allegedly failed to receive the Nobel Prize because of his Mormon faith; Paul Krugman received the prize because of the (again, allegedly) left-leaning bias of the committee; and when the scientist behind the discovery of HPV was selected to receive the prize, an anticorruption investigation soon followed, since two senior figures on the committee had strong links with a pharmaceutical company dealing in HPV vaccines.
The Wisdom of Data
Now consider the core of the Thomson Reuters process. The company’s analysts go over all the papers and citations in an automated fashion, using algorithms that they define. The algorithms are only biased if they’re created that way – which means the algorithms and the entire process would need to be fully transparent. The algorithms can cut the list of potential candidates down to a mere dozen or so – and then let the Royal Swedish Academy do the rest of the work and vote for the top ones.
Is this process necessarily better than the committee? Obviously, many flaws still abound. The automated process could put more emphasis on charismatic ‘rock stars’ of the scientific world, for example, and neglect the more down-to-earth scientists. Or it could focus on those scientists who are incredibly well-connected and who have many collaborations, while leaving aside those scientists who only made one big impact in their field. However, proper programming of the algorithms – and accurately defining the parameters and factors behind the selection process – should take care of these issues.
Does this process, in which an automated algorithm picks a human winner, seem weird to you? It shouldn’t, because it happens on the World Wide Web every second. Each time you run a Google search, the computer goes over millions of possible results and shows you only the ‘winners’ at the top, according to factors that include their links to each other (i.e. the number of citations), the reputation of each site, and other parameters. Google has refined this selection process into a form of art – and an exact science.
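The core of Google’s original idea – ranking items by who links to them, with links from important items counting for more – maps naturally onto citations between papers. Below is a minimal sketch of that idea in the spirit of the classic PageRank iteration; the citation graph and the damping factor are made up for illustration, and this is of course nothing like Google’s production code.

```python
# Toy PageRank over a citation graph: a paper is important if it is
# cited by other important papers. The graph below is invented.
citations = {
    "A": ["B", "C"],   # paper A cites B and C
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
papers = sorted(citations)
rank = {p: 1.0 / len(papers) for p in papers}
damping = 0.85  # the damping factor commonly used in PageRank

for _ in range(50):  # iterate until the ranks stabilize
    new_rank = {p: (1 - damping) / len(papers) for p in papers}
    for p, cited in citations.items():
        # each paper distributes its rank evenly among the papers it cites
        for q in cited:
            new_rank[q] += damping * rank[p] / len(cited)
    rank = new_rank

# The most-cited (and cited-by-important) paper floats to the top.
top = max(rank, key=rank.get)
```

In this toy graph, paper “C” wins: it is cited by three of the four papers, including the otherwise well-ranked “A”.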
Why not do that to the Nobel Prize as well?
.
.
.
Your Nobel Forecast
Over the next week, the recipients of the Nobel Prize will be announced one after the other. Would you like to impress your friends by forecasting the recipients? Here’s an infographic made by Thomson Reuters and detailing their forecasts for 2015. Good luck to everyone in it!
Listing of the top forecasts made by Thomson Reuters for each scientific Nobel Prize category in 2015. Originally from Thomson Reuters.
Credit: the Nobel Prize medal's image at the top of the post was taken by Adam Baker on Flickr.
Exactly fifty-eight years ago, the Soviet Union rocked history with the successful launch of Sputnik 1 – the first artificial satellite humanity ever placed in orbit around Planet Earth. While Sputnik was a pretty small satellite – only 58 cm in diameter – its launch triggered the Space Race, in which the U.S.A. and the Soviet Union tried to impress the world with their innovations, rockets and astronauts. The Space Race reached an exciting culmination with the Moon landing in 1969, and has gradually declined ever since.
Today, some 3,000 satellites orbit the Earth. Without these satellites, our lives would not be nearly as easy as they are now. According to the Union of Concerned Scientists, satellites help us forecast the weather, enable us to navigate with GPS, send television signals straight to households, and do many other things. In short, they’re incredibly useful, and it’s clear that we’re now reaping the returns on the investment made during the Space Race – even though at the time, the two superpowers mainly fought over the prestige of being the first, the best and the brightest.
So today, the day in which Sputnik 1 was launched, it’s interesting to me to think about a hypothetical scenario in which another technological breakthrough occurs: a real game-changer which forces all the world’s citizens to rethink their old beliefs, and drags all the superpowers into another race. What would that scenario look like?
First, it’s clear that the world is a fair bit more cynical today than it was during the Cold War. There are no longer two market and national philosophies at war. Capitalism has clearly won the game, at least for now. While radical religion could be presented as a rival to democracy, the only place right now where truly radical, unapologetic expressions of religion can be found is the Islamic State. And while ISIS has proliferated at an unbelievably rapid pace, it lacks the capacity to make new scientific and technological discoveries. And let’s say gently that its contributions to the humanities and the arts are not exactly impressing the world either.
Since the world is largely uninterested in prestige anymore, we need a technological breakthrough whose impact and consequences would be clear from the outset. What breakthrough might that be?
Free Resources from Space
There are many answers to that question, like discovering a source of free energy (possibly cold nuclear fusion), or finding a way to play with the law of gravity and change the weight of buildings and even human beings (imagine that!). However, scientific breakthroughs are often made on the shoulders of giants – i.e. they rely on plenty of previous research and past successes – and the current scientific literature gives us no reassurance that anyone has even come close to cracking either of these two challenges.
So let’s opt for a more likely scenario, and imagine that sometime in the next ten years, a private firm succeeds in mining an asteroid in deep space and brings back to orbit sacks full of gold and platinum. We can definitely imagine this scenario becoming reality, since there are currently at least two companies (Planetary Resources and Deep Space Industries) competing with each other to be the first to mine asteroids and bring their riches back to Earth.
Were such a venture to succeed, it would have far-reaching consequences for the future of the Earth. At the moment, the developed world relies on many precious materials that can be found mostly in developing nations. According to data from Fast Company, these materials include fluorspar (CaF2, used for high-performance optics) from Mexico, cobalt and tantalum from the Democratic Republic of the Congo, niobium (Nb, used in microcapacitors and pacemakers) from Brazil, and an estimated $1 trillion in mineral deposits in Afghanistan. These countries would essentially lose a significant part of their income if precious materials were to be imported from space.
This chart by Visual Capitalist shows how long the resources on Earth will last. Please note that the image here is about half of the full chart, which can be found on Visual Capitalist’s site.
The developed and powerful nations would face other difficulties. Russia, the U.S.A., China, India, Japan and the European Union all have the means necessary to start space mining themselves, and they will strive to do so as soon as possible, so that each of them can be the first to reach the ‘easiest to pick’ asteroids – the ones whose trajectories bring them closest to Earth, and which contain the largest concentrations of precious metals. At the same time, they will go into overdrive developing anti-spacecraft weapons, so that they can protect their investments in space. After all, nobody wants to drag an asteroid all the way to Earth, just to have a competing nation take control over it.
A space mining race, then, is one likely result of this scenario. An alternative, though, might be found in collaboration. Deep space has plenty of asteroids waiting for mankind to mine them, and 13,000 of those asteroids have orbits that bring them close to Earth. A single platinum-rich asteroid contains 174 times the yearly world output of platinum. Perhaps pooling together humanity’s resources, then, and coordinating every nation’s efforts, would be the best way to move forward and to share the abundant wealth to come.
Conclusion
I have no idea which way the world will turn, but one thing is clear: this scenario forces everyone to rethink their positions regarding space, and to take action of one sort or another. No nation could afford to stay out of the new space race, or at least out of the debate over the reallocation of resources that would follow. There would be much gnashing of teeth and a lot of anxiety on the part of world leaders, but in the long term this development would prove to be one of the greatest boons ever bestowed on humanity, leading to an era of abundance in precious metals and materials.
Interestingly, we are already starting to consider these scenarios seriously. In a recent workshop conducted by Dr. Deganit Paikowsky and yours truly, the full impact of a similar scenario was analyzed by students who role-played the different nations of the Earth. The results of the workshop will be published soon, but until then, I would love to receive feedback from you: how do you think the nations would react to this scenario? Will we see a new space race, or a joint thrust forward? And which do you think would be the most efficient and successful path for humanity as a whole?
The answers to these questions could truly shape our future.
Today, a 26-year-old gunman opened fire at Oregon’s Umpqua Community College, killing at least ten people and injuring seven others. President Obama, a longtime opponent of the gun industry, immediately responded with a fierce speech promoting gun regulation. While I do support a certain amount of gun regulation, it seems to me that Obama is still trying to lock the barn doors long after the horses have escaped. Why do I say that? Because even today, any person with a spare $1,000 in their bank account can print a gun for themselves.
You’ve probably heard about 3D-printing before. If you haven’t, you must’ve been hiding in a very deep cave with no Wi-Fi. The simplest and cheapest 3D-printers basically consist of a robotic arm that deposits thin layers of plastic one on top of the other, according to a schematic you can download from the internet. In that way, any user can print famous historical statues, spare parts for their dishwasher, or a functional gun.
How easy is it to use a 3D-printer to print a gun? Much easier than it should be. When I was in Israel, I used a 3D-printer that cost approximately $1,500 in an effort to print a gun. I searched for the schematics that the Defense Distributed group devised and uploaded to the internet, and downloaded the files from Pirate Bay in less than two minutes. The printing itself took some time, and it took me some effort to stitch all the parts together, but in less than 48 hours I held in my hands a functional ghost gun of my own.
A 3D-printed gun. Credit for this image and the upper one goes to Kamenev.
Why is it called a ghost gun? Because this gun is untraceable: it’s not registered anywhere, and it has no serial number. As far as the government knows, this gun does not even exist. And I could print as many guns as I wanted, with no one being the wiser. Heck, I could stockpile them in my house for emergencies, or give them out to militias and rebel groups.
The only problem is that the gun I printed is nearly useless. It has a recorded tendency to explode in your hands, and it is not accurate at distances of more than two meters. Obviously, it is not a fully automatic or even a semiautomatic firearm. In short, I could just as well have used a metal tube with gunpowder at one end and a stone stuck in the other. So yeah, it was a pretty lousy gun, back in 2013.
But now we’re getting near the end of 2015, and things have been changing rapidly.
Consider that the original schematics for the 3D-printed gun were downloaded more than 100,000 times within just a few days of their release to the public. Since they are open source, anyone and everyone can make changes to the schematics, leading to a wide variety of daughter-schematics, some of which are improved versions of the first clunky gun. Combine that with the elevated capabilities of today’s printers, and the many improvements that lie in store for us, and you’ll realize that five years from now, gun control at sales venues will be largely useless, since people will be able to print sophisticated firearms in their own homes.
Disarming the Future
Does that mean we should cut short any effort at gun control in the present? Absolutely not. America is suffering from an epidemic of mass shootings, partly because anyone can get a deadly weapon with minimal background checks. At the same time, however, we should keep an eye out for technologies that disrupt the current gun industry and bring the power to manufacture firearms to the layperson.
How do we deal with such a future – which is probably a lot closer to becoming the present than most people suspect?
Here’s one answer for you: it turns out that the Oregon shooter left a message on a social media forum this morning, warning some people not to come to school tomorrow. I’m not sure this message is the real deal, but we do know that people who commit mass shootings leave behind evidence of their intentions in the virtual world.
Consider the following, just as anecdotes –
Eliot Rodger killed seven people in a mass-shooting in California. His Youtube videos pretty much state in advance what he was going to do.
Terence Tyler, an ex-Marine who was suffering from depression, killed two of his co-workers and then himself in a supermarket. Sometime before the incident he twice posted “Is it normal to want to kill all your co-workers?” on Twitter.
Jared Loughner killed six people and wounded fourteen. Diagnosed as a paranoid schizophrenic, he wrote “Please don’t be mad at me” on Myspace, and took photos of himself with his trusty rifle on the morning of the shooting.
These are obviously just anecdotes, but they serve to highlight the point: everyone, even mass killers, wants to be noticed, to deliver their message to the public, or to share their intimate thoughts and anguish. Their musings, writings and interactions can all be found in the virtual world, where they are recorded for eternity – and can be analyzed in advance by sophisticated algorithms designed to detect potential walking disasters.
While this sentence is rapidly becoming a cliché, I must say it again: “This is NOT science fiction”. Facebook is already running algorithms over every chat, looking for certain dangerous phrases or keywords that could indicate criminal intent. If it discovers potential criminals, Facebook alerts the authorities. Similarly, Google scans images sent via Gmail to identify pedophiles.
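At its crudest, the kind of scanning described above can be sketched as simple phrase matching. The alert phrases and messages below are invented examples, and real systems are far more sophisticated – they weigh context, history and many other signals rather than doing plain string matching – but the sketch shows the basic shape of the idea.

```python
# Toy message scanner: flag messages containing known alert phrases.
# The phrases and messages here are invented for illustration only.
ALERT_PHRASES = ["don't come to school", "kill all your co-workers"]

def flag_message(text: str) -> bool:
    """Return True if the message contains any alert phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in ALERT_PHRASES)

messages = [
    "see you at practice tomorrow",
    "Don't come to school tomorrow",
]
flagged = [m for m in messages if flag_message(m)]
```

Even this naive version illustrates the policy question at stake: the code itself is trivial, and the hard part is deciding who gets to run it over whose messages.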
Obviously, identifying individuals who match the right (or very wrong) combination of declarations, status in life and other parameters is a complicated task, but we’re making a start on it today – and in the long run, it will prove more effective than any gun control regulation we can pass.
And so, here’s my forecast for the day: ten years from now, the president of the United States will stand in front of the camera, and explain that he needs the public’s support in order to pass laws that will enable governmental algorithms to go automatically and constantly over everyone’s information online – and identify the criminals in advance.
The alternative is that this future president won’t even ask for permission – and that should frighten us all so much more.