One of the episodes that the United States Postal Service (USPS) is trying to sweep under the rug is its attempt to use rocket mail, i.e. to send mail on actual rockets. Oh, and you might want to know that the first rocket the USPS used that way was a nuclear missile.
The idea of rocket mail was originally developed in Germany in the 19th century, but never really took off. However, as missile technology improved, the USPS took note of the idea and decided to give it a shot. And so, the first official American rocket mail was launched in 1959.
For that purpose, the USPS chose a Regulus cruise missile armed with a nuclear warhead. They took the warhead off and replaced it with mail containers designed to withstand the impact when the missile hit the ground. The missile was launched from Virginia and reached its destination in Florida in just 22 minutes. Since the two states are around 700 miles apart (~1,200 km), that means the mail got to its destination at a speed of around 3,300 km/h. That’s pretty impressive for mail delivery.
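For the curious, the speed figure is simple arithmetic, using nothing beyond the distance and flight time quoted above:

```python
# Back-of-the-envelope check of the rocket mail delivery speed,
# using the article's own figures: ~1,200 km covered in 22 minutes.
distance_km = 1200        # approximate Virginia-to-Florida distance
time_hours = 22 / 60      # 22 minutes, expressed in hours

speed_kmh = distance_km / time_hours
print(round(speed_kmh))   # → 3273, i.e. roughly 3,300 km/h
```

For comparison, a commercial jet cruises at around 900 km/h, so the mail moved several times faster than any airmail before or since.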
The US Postmaster General got so excited that he publicly declared that the event was –
“of historic significance to the peoples of the entire world”
and that –
“Before man reaches the moon, mail will be delivered within hours from New York to California, to Britain, to India or Australia by guided missiles. We stand on the threshold of rocket mail.”
Except, as we know very well now, they really didn’t stand on the threshold of rocket mail. The costs of rocket mail were too high – particularly for the infrastructure involved, which included the launch systems and the missile itself which couldn’t be reused. What’s more, at the same time that rocket mail was first attempted, international air travel became dramatically cheaper, so that important packages could easily be delivered in just a single day over the ocean, with no need for rockets of any kind.
Rocket mail is a historic invention which many futurists should consider whenever they gush about new technologies taking over the world. In the end, it all comes down to cost, and if that new technology is more expensive than what you currently use – or even than other technologies being developed at the same time – it probably won’t be used after all.
But at least the idea of rocket mail finally found use in Mission Impossible II – where Ethan Hunt’s sunglasses are delivered to him via a rocket. The US Postmaster General can be proud indeed.
A while ago I wrote in this blog about flying cars, and how we should start seeing them in our skies en masse towards 2035. It’s always nice to check on such forecasts and see how they’re progressing and being reinforced by recent events. So here’s an update, composed of two recent news items from April: one of them is basically eye candy, while the other could be a serious indicator that flying cars are afoot (pun fully intended).
The Eye Candy
Let’s open with the pretty and shiny stuff. It turns out an aerial innovator has just flown his own invention, the Flyboard Air, a whopping distance of 2,252 meters. He smashed through the old record of 275 meters, flying at a height of 30 meters above water and a top speed of around 70 km/h. That’s an impressive achievement!
Unfortunately, it doesn’t mean anything for a future of flying cars.
The main reason for my lack of enthusiasm is that the hoverboard is powered by jet fuel – A1 kerosene carried on the user’s back. As long as flying cars are powered by conventional fossil fuels, they won’t find their way into common use. Flying simply takes too much energy, and fossil fuels are too expensive and harmful to the environment to be used to power such wasteful activity. The only flying cars that have a chance to succeed are ones that operate on electricity, and that’s only if we assume that electricity is about to become abundant due to the exponential rise in solar energy use.
So this is probably just another pretty invention, but when such inventions appear on the market one after the other, one starts to see a trend. You can’t ignore the fact that aerial drones capable of carrying a human passenger are appearing more and more in the news. Will all these innovations lead to an actual flying taxi service? Only if the two conditions I specified in the original post about flying cars come true: they need to be electric, and they need to be autonomous, so that you don’t need an expensive (and mistake-prone) human pilot.
The Flying Taxis of the Future
In the last two months, exciting things have happened for e-volo: the manufacturer of the world’s first certified Multicopter (i.e. a helicopter with multiple rotors).
The Multicopter received a permit to fly from the German authorities in February 2016. Its first manned flight took place at the end of March, and ended with absolutely no issues. The pilot controlled the vehicle easily with a single joystick, and the Multicopter was stable and autonomous enough to hold its position automatically even when the pilot took his hand off the joystick.
The vehicle can reach a speed of up to 100 km/h, with 18 rotors powered by nine independent batteries, and a 450 kg take-off weight. The large number of rotors and batteries means that even if one of them fails, the Multicopter can still stay high in the air. Since the Multicopter relies on electric motors, it is one of the top candidates in the race to become the world’s first air taxi.
Which is exactly what e-volo, the company behind the Multicopter, is trying to do.
According to ASM International, e-volo is looking to create a new market of air taxi services. In the short term, they plan to use the personal vehicles on certain predetermined routes, where there will be no chance of collision. In the medium term, however, they are already thinking about providing the vehicles with autonomous capabilities, so that they will be able to go any way the passenger chooses. The passenger will pick the destination, and the AI will make sure the air taxi gets them there safely.
There are encouraging indicators that air taxi services will indeed become reality by 2035, but the obstacles are still out there. We still need to develop more reliable personal aircraft with improved autonomous functions. Also, electric flying vehicles will require an abundance of energy for mass-scale use, and that energy will have to come from an abundant source: the Sun. That means we’ll have to keep an eye on developments in solar energy harvesting as well. Luckily, solar energy is moving forward at an exponential rate.
So, if everything comes together just right, I still stand by my original forecast: flying taxis by 2035 it is!
Solar panels have undergone rapid evolution over the last ten years. I’ve written about this in previous posts on the blog (see for example the forecast that we’ll have flying cars by 2035, which largely depends on the sun providing us with an abundance of electricity). The graph below pretty much says it all: the cost of producing just one watt of solar energy has gone down to somewhere between 1 percent and 0.5 percent of what it was just forty years ago.
At the same time that prices go down, we see more installations of solar panels worldwide, roughly doubling every 2-3 years. Worldwide solar capacity in 2014 was 53 times higher than in 2005, and global solar photovoltaic installations grew 34% in 2015, according to GTM Research.
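As a rough sanity check on those growth figures (assuming smooth exponential growth, which reality only approximates), we can compute the doubling time implied by a 53-fold increase over nine years:

```python
import math

# Doubling time implied by 53x growth in worldwide solar capacity
# between 2005 and 2014, assuming steady exponential growth.
growth_factor = 53
years = 2014 - 2005

doubling_time_years = years * math.log(2) / math.log(growth_factor)
print(round(doubling_time_years, 1))  # → 1.6 years
```

A doubling time of about 1.6 years is actually on the fast side of the "every 2-3 years" rule of thumb – the growth over that particular decade outpaced even the usual estimate.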
It should come as no surprise that regulators are beginning to take note of the solar trend. Indeed, two small California cities – Lancaster and Sebastopol – passed laws in 2013 requiring new houses to include solar panels on their roofs. And now, finally, San Francisco joins the fray as the first large city in the world to require solar panels on every new building.
San Francisco has a lofty goal: meeting all of its energy demands by 2025, using renewable sources only. The new law seems to be one more step towards that achievement. But more than that, the law is part of a larger principle, which encompasses the Internet of Things as well: the Activation of Everything.
The Activation of Everything
To understand the concept of the Activation of Everything, we need to consider another promising piece of legislation that will soon be introduced in San Francisco by Supervisor Scott Wiener. It would allow solar roofs to be replaced with living roofs – roofs that are covered with soil and vegetation. According to a 2005 study, living roofs reduce cooling loads by 50-90 percent, and reduce stormwater waste and runoff into the sewage system. They retain much of the rainwater, which later returns to the atmosphere through evaporation. They enhance biodiversity, sequester carbon and even capture pollution. Of course, not every plant can be grown efficiently on such roofs – particularly not in dry California – but there’s little doubt that optimized living roofs can contribute to the city’s environment.
Supervisor Wiener explains the reasons behind the solar power legislation in the following words –
“This legislation will activate our roofs, which are an under-utilized urban resource, to make our City more sustainable and our air cleaner. In a dense, urban environment, we need to be smart and efficient about how we maximize the use of our space to achieve goals like promoting renewable energy and improving our environment.”
Pay attention to the “activate our roofs” part. Supervisor Wiener is absolutely right that roofs are an under-utilized urban resource. Whether you want to use them to harvest solar power or to grow plants and improve the environment, the idea is clear: we need to activate our resources – by any means possible – so that we maximize their use.
That is what the Activation of Everything principle means: activate everything, whether by allowing surfaces and items to harvest power or resources, or to have sensing and communication capabilities. In a way, activation can also mean convergence: take two functions or services that were performed separately in the past, and allow them to be performed together. In that way, a roof is no longer just a means to provide shade and protection from the weather, but can also harvest energy and improve the environment.
The Internet of Things is a spectacular example of the Activation of Everything principle in action. In the Internet of Things world, everything will be connected: every roof, every wall, every bridge and shirt and shoe. Every item will be activated for added purposes. Our shirts will communicate our respiration rate to our physicians. Bricks in walls will report on their structural integrity to engineers. Bridges will let us know when they’re close to maximum capacity, and so on.
The Internet of Things largely relies on sophisticated electronic technologies, but the Activation of Everything principle is more general than that. The Activation of Everything can also mean creating solar or living roofs, or even creating walls that include limestone-secreting bacteria that can fix cracks as soon as they form.
Where else can we implement the Activation of Everything principle in the future?
The Activation of Cars
There have been many ideas to create roads that can harvest energy from cars’ movements. Unfortunately, the Laws of Thermodynamics reveal that such roads will in fact ‘steal’ that energy from passing cars, by making it more difficult for them to travel along the road. Not a good idea. The activation of roofs works well specifically because it has a good ROI (Return on Investment), with a relatively low energetic investment and large returns. Not so with energy-stealing roads.
But there’s another unutilized resource in cars – the roof. We can use the Activation principle to derive insights about the future of car roofs: hybrid cars will be covered with solar panels, which will be used to harvest energy when they’re sitting in the parking lot, and store it for the ride home.
Don’t get the math wrong: cars with solar roofs won’t be able to drive endlessly. In fact, if they rely only on solar power, they’ll barely even crawl. However, they will be able to power the electrical devices in the car, and trucks may even use solar energy on long journeys, to cool the wares they carry. If the cost of solar panel installation continues to go down, these uses could be viable within the decade.
The Activation of Farmlands
Farmlands are being activated today in many different ways: from sensors all over the field, and sometimes in every tree trunk, to farmers supplementing their livelihood by deploying solar panels and ‘farming electricity’. Some are combining both solar panels and crop and animal farming by spreading solar panels at a few meters height above the field, and growing plants that can make the most of the limited sunlight that gets to them.
The Activation of the Air
Even the air around us can be activated. Aerial drones may be considered an initial attempt to activate the sky by filling it with flying sensors, but they are large, cumbersome, and interfere with aerial traffic and the view. However, we’ll be able to activate the air in various other ways in the future, such as smart dust – extremely small sensors with limited wireless connectivity that will transmit data about their whereabouts and the conditions there.
The Activation of Food
Food is one of the few things that has barely been activated so far. Food today serves only two goals: to please by tasting great, and to nourish the body. According to the principle of Activation, however, food will soon serve several other purposes. Food items could be used to deliver therapeutics or sensors into the body, or possibly be produced with built-in biocompatible electronics and LEDs to make the food look better on the plate.
As human beings, we’ve always searched for ways to optimize efficiency and to make the best use of the limited resources we have. One of those limited resources is space, which is why we try to activate – that is, add functions to – every surface and item today.
It’s fascinating to consider how the Activation of Everything will shape our world in the next few decades. We will have sensors everywhere, solar panels everywhere, batteries and electronics everywhere. It will be a world where nothing is as it seems at first glance anymore. An activated world – a living world indeed.
For a long time now, scientists have been held in thrall by publishers. They have worked voluntarily – without any pay – as editors and reviewers for the publishers, and they have allowed their research to be published in scientific journals without receiving anything for it. No wonder scientific publishing has long been considered a lucrative business.
Well, that’s no longer the case. Now, scientific publishers are struggling to maintain their stranglehold over scientists. If they succeed, science and the pace of progress will take a hit. Luckily, the entire scientific landscape is turning on them – but a little support from the public will go a long way in ensuring the eventual downfall of an institution that is no longer relevant or useful to society.
To understand why things are changing, we need to look back in history to 1665, when the British Royal Society began publishing research results in a journal called Philosophical Transactions of the Royal Society. Since the number of pages in each issue was limited, the editors could only pick the most interesting and credible papers to appear in the journal. As a result, scientists from all over Britain fought to have their research published in it, and any scientist whose research appeared in an issue gained immediate recognition throughout Britain. Scientists were even willing to become editors for scientific journals, since that was a position that commanded respect – and gave them power to push their views and agendas in science.
Thus was the deal struck between scientific publishers and scientists: the journals provided a platform for the scientists to present their research, and the scientists fought tooth and nail to have their papers accepted into the journals – often paying out of their own pockets for it to happen. The journal publishers then held full copyright over the papers, to ensure that the same paper would not be published in a competing journal.
That, at least, was the old way for publishing scientific research. The reason that the journal publishers were so successful in the 20th century was that they acted as aggregators and selectors of knowledge. They employed the best scientists in the world as editors (almost always for free) to select the best papers, and they aggregated together all the necessary publishing processes in one place.
And then the internet appeared, along with a host of automated processes that let every scientist publish and disseminate a new paper with minimal effort. Suddenly, publishing a new scientific paper and making the scientific community aware of it could have a radically new price tag: it could be completely free.
Let’s go through the process of publishing a research paper, and see how easy and effortless it became:
The scientist sends the paper to the journal: Can now be conducted easily through the internet, with no cost for mail delivery.
The paper is routed to the editor dealing with the paper’s topic: This is done automatically, since the authors specify keywords that ensure the right editor receives the paper by e-mail. Since the editor is actually a scientist volunteering to do the work for the publisher, there’s no cost attached anyway. Neither is there any need for a human secretary to spend time and effort cataloguing papers and sending them to editors manually.
The editor sends the paper to specific scientific reviewers: All the reviewers are working for free, so the publishers don’t spend any money there either.
Let’s assume that the paper was accepted, and is going to appear in the journal. Now the publisher must:
Paginate, proofread, typeset, and ensure the use of proper graphics in the paper: These tasks are now performed nearly automatically using word processing programs, and are usually handled by the original authors of the paper.
Print and distribute the journal: This is the only step that necessarily costs actual money, since it is performed in the physical world, and atoms are notoriously more expensive than bits. But do we even need this step anymore? I have been walking the corridors of academia for more than ten years, and I’ve yet to see a scientist with his nose buried in a printed journal. Instead, scientists read papers on their computer screens, or print them in their offices. The mass-printed version is almost completely redundant. There is simply no need for it.
In conclusion, it’s easy to see that while the publishers served an important role in science a few decades ago, they are simply not necessary today. The above steps can easily be handled by community-managed sites like arXiv, and even the selection of high-quality papers can be performed today by the scientists themselves, in forums like Faculty of 1000.
The publishers have become redundant. But worse than that: they are damaging the progress of science and technology.
The New Producers of Knowledge
A few years from now, the producers of knowledge will not be human scientists but computer programs and algorithms. Programs like IBM’s Watson will skim through hundreds of thousands of research papers and derive new meanings and insights from them. This will be an entirely new field of scientific research: retrospective research.
Computerized retrospective research is happening right now. A new model in developmental biology, for example, was discovered by an artificial intelligence engine that went over just 16 experiments published in the past. Imagine what will happen when AI algorithms cross-match thousands of papers from different disciplines, and come up with new theories and models supported by the research of thousands of past scientists!
For that to happen, however, the programs need to be able to go over the vast number of research papers out there, most of which are copyrighted, and held in the hands of the publishers.
You may say this is not a real problem. After all, IBM and other large data companies can easily cover the millions of dollars the publishers will demand annually for access to the scientific content. But what will academic researchers do? Many of them do not enjoy the backing of big industry, and will not have access to scientific data from the past. Even top academic institutes like Harvard University find themselves hard-pressed to cover the annual fees the publishers demand for access to past papers.
Many ventures for using this data are based on the assumption that information is essentially free. But we know, for example, that Google is wary of uploading scanned books from the last few decades, even if these books are no longer in circulation. Google doesn’t want to be sued by the copyright holders, and is thus waiting for the copyrights to expire before it uploads each book in its entirety and lets the public enjoy it for free. So many free projects could be conducted to derive scientific insights from literally millions of research papers from the past. Are we really going to wait nearly a hundred years before we can use all that knowledge? Knowledge, I should mention, that was gathered by scientists funded by the public – and should thus remain in the hands of the public.
What Can We Do?
Scientific publishers are slowly dying, while free publication and open access to papers are becoming the norm. The process of transition, though, is going to take a long time still, and provides no easy and immediate solution for all those millions of research papers from the last century. What can we do about them?
Here’s one proposal. It’s radical, but it highlights one possible way of action: have the government, or an international coalition of governments, purchase the copyrights for all copyrighted scientific papers, and open them to the public. The venture will cost a few billion dollars, true, but it will only have to occur once for the entire scientific publishing field to change its face. It will set to right the ancient wrong of hiding research under paywalls. That wrong was necessary in the past when we needed the publishers, but now there is simply no justification for it. Most importantly, this move will mean that science can accelerate its pace by easily relying on the roots cultivated by past generations of scientists.
If governments don’t do that, the public will. Already we see the rise of websites like Sci-Hub, which provides free (i.e. pirated) access to more than 47 million research papers. Having been persecuted by both the publishers and the government, Sci-Hub has recently been forced to move to the Darknet – the dark and anonymous section of the internet. Scientists who want to browse past research results – almost entirely paid for by the public – will thus have to move over to the Darknet, where weapons smugglers, pedophiles and drug dealers lurk today. That’s a sad turn of events that should make you think. Just be careful not to sell your thoughts to the scholarly publishers, or they may never see the light of day.
Dr Roey Tzezana is a senior analyst at Wikistrat, an academic manager of foresight courses at Tel Aviv University, blogger at Curating The Future, the director of the Simpolitix project for political forecasting, and founder of TeleBuddy.
When I first read about the invention of the Right Cup, it seemed to me like magic. You fill the cup with water, raise it to your mouth to take a sip – and immediately discover that the water has turned into orange juice. At least, that’s what your senses tell you, and Isaac Lavi, the Right Cup’s inventor, seems to be a master at fooling the senses.
Lavi got the idea for the Right Cup some years ago, when he was diagnosed with diabetes at the age of 30. His new condition meant that he had to let go of all sugary beverages and drink only plain water. As an expert in the field of scent marketing, however, Lavi thought up a new solution to the problem: adding scent molecules to the cup itself, which trick your nose and brain into thinking that you’re actually drinking fruit-flavored water instead of plain water. The invention can now be purchased on Indiegogo, and hopefully it even works.
“My two diabetic parents have been drinking from this cup for the last year and a half,” Lavi told me in an e-meeting we had last week, “and I saw that in taste testing in a preschool, kids drank from these cups and then asked for more ‘orange juice’. And I told myself – wow, it works!”
What does the Right Cup mean for the future?
A Future of Nano-technology
First and foremost, the Right Cup is one result of all the massive investments in nano-technology research made in the last fifteen years.
“Between 2001 and 2013, the U.S. federal government funneled nearly $18 billion into nanotechnology research… [and] The Obama administration requested an additional $1.7 billion for 2014,” writes Martin Ford in his 2015 book Rise of the Robots. These billions of dollars produced, among other results, new understandings about the release of micro- and nano-particles from polymers, and about the ways molecules in general react with the receptors in our noses. In short, they enabled the creation of the Right Cup.
There’s a good lesson to be learned here. When our leaders justified their investments in nano-technology, they talked to us about the eradication of cancer via drug delivery mechanisms, or about bridges held up by cobwebs of carbon nanotubes. Some of these ideas will be fulfilled, for sure, but before that happens we might all find ourselves enjoying the more mundane benefit of drinking illusory orange-flavored water. We can never tell exactly where the future will lead us: we can invest in the technology, but eventually innovators and entrepreneurs will take those innovations and put them to unexpected uses.
All the same, if I had to guess I would imagine many other uses for similar ‘Right Cups’. Kids in Africa could use cups or even straws which deliver tastes, smells and even more importantly – therapeutics – directly to their lungs. Consider, for example, a ‘vaccination cup’ that delivers certain antigens to the lungs and thereby creates an immune reaction that could last for years. This idea brings back to mind the Lucky Iron Fish we discussed in a previous post, and shows how small inventions like this one can make a big difference in people’s lives and health.
A Future of Self-Reliance
It is already clear that we are rushing headlong into a future of rapid manufacturing, in which people can enjoy services and production processes in their households that were reserved for large factories and offices in the past. We can all make copies of documents today with our printer/scanner instead of going to the store, and can print pictures instead of waiting for them to be developed at a specialized venue. In short, technology is helping us be more geographically self-reliant – we don’t have to travel anymore to enjoy many services, as long as we are connected to the digital world through the internet. The internet provides information, and end-user devices produce the physical result. This trend will only progress further as 3D printers become more widespread in households.
The Right Cup is another example of a future of self-reliance. Instead of going to the supermarket and purchasing orange juice, you can buy the cup just once and it will provide you with flavored water for the next 6-9 months. But why stop here?
Take the Right Cup of a few years ahead and connect it to the internet, and you have the new big product: a programmable cup. This cup will have a cartridge of dozens of scent molecules, each of which can be released at different paces, and in combination with the other scents. You don’t like orange-flavored water? No problem. Just connect the cup to the World Wide Web and download the new set of instructions that will cause the cup to release a different combination of scents so that your water now tastes like cinnamon flavored apple cider, or any other combinations of tastes you can think of – including some that don’t exist today.
A Future of Disruption?
As with any innovation proposed on crowdfunding platforms, it’s difficult to know whether the Right Cup will live up to its hype. As of now the project has received more than $100,000 – more than 200% of its goal. Should the Right Cup prove itself taste-wise, it could become an alternative to many soft drinks – particularly if it’s cheap and long-lasting enough.
Personally, I don’t see Coca-Cola, Pepsi or orchard owners panicking anytime soon, and neither does Lavi, who believes the beverage industry is “much too large and has too many advertising resources for us to compete with them in the initial stages.” All the same, if the stars align just right, our children may opt to drink from their Right Cups instead of buying a bottle of orange juice at the cafeteria. Then we’ll see some panicked executives scrambling around at those beverage giants.
It’s still too early to divine the full impact the Right Cup could have on our lives, or even whether the product works as well as promised. For now, we would do well to focus on the previously identified mega-trends the product fulfills: the idea of using nano-technology to remake everyday products and imbue them with added properties, and the principle of self-reliance. In the next decade we will see more and more products based on these principles. I daresay our children are going to be living in a pretty exciting world.
Disclaimer: I received no monetary or product compensation for writing this post.
Two days ago, the International Agency for Research on Cancer, part of the World Health Organization (WHO), released a statement that’s probably still causing meat industry leaders to quiver in their star-studded boots. The agency convened a working group of 22 experts, who reviewed more than 800 studies on the association between cancer and red and processed meat. The final results were phrased unequivocally: eating just 50 grams of processed meat every day makes one 18% more likely to develop colorectal (bowel) cancer.
The obvious question now concerns the future of meat eating. Are we about to see the demise of hamburger joints? Is McDonald’s about to go down in flames, along with its beef patties?
Probably not, at least in the short term, for a few reasons.
Reasons for Meat to Stay
I divide the reasons that meat will remain part of our culture into two categories, each coming from a different audience: the reactions of the public, and the innovations coming from start-ups.
Will the public forego meat? That is one possible outcome, but it seems extremely radical in the short term. Even now, articles in journals and magazines are bringing sense and nuance to the WHO’s declaration: they explain that while an 18% increased chance of developing cancer sounds frightening, the actual numbers are much more nuanced. When Cancer Research UK crunched the numbers, it found that –
“…out of every 1000 people in the UK, about 61 will develop bowel cancer at some point in their lives. Those who eat the lowest amount of processed meat are likely to have a lower lifetime risk than the rest of the population (about 56 cases per 1000 low meat-eaters).”
Now, that sounds much less scary, doesn’t it?
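To make the relative-versus-absolute distinction concrete, here is a rough back-of-the-envelope calculation. Note the simplification: it naively applies the 18% relative increase to the low-consumption baseline of 56 cases per 1,000, so treat it as an illustration rather than epidemiology.

```python
# Illustrating relative vs absolute risk with Cancer Research UK's figures:
# ~56 bowel-cancer cases per 1,000 among the lowest processed-meat eaters.
baseline_per_1000 = 56       # lifetime cases per 1,000 low meat-eaters
relative_increase = 0.18     # the WHO's 18% relative risk figure

heavy_per_1000 = baseline_per_1000 * (1 + relative_increase)
extra_cases = heavy_per_1000 - baseline_per_1000

print(round(heavy_per_1000, 1))  # → 66.1 cases per 1,000
print(round(extra_cases, 1))     # → 10.1 extra cases per 1,000
```

In other words, a scary-sounding "18% increase" translates to roughly ten additional cases per thousand people over a lifetime – meaningful, but a far cry from what the headline number suggests.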
The articles also explain the rationale behind the WHO’s five categories of potential cancer-inducing agents and chemicals. Group 1 contains the agents whose potential to cause cancer the experts are certain of, but there is no distinction between the different levels of harm caused by each substance! That means tobacco and processed meat sit side by side in Group 1, even though smoking kills more than one million people every year, whereas processed meat kills ‘only’ 34,000 people every year. And guess what? People are still smoking, with 17.8% of all U.S. adults smoking cigarettes!
And that leads us to another matter: people are willing to do things that are harmful to them in the long run. We go out in the sun, even though solar radiation is also in Group 1. Women take contraceptives to avoid pregnancy – despite the known increased risk of cancer. And of course, 51.9% of all Americans aged 12 or older consume alcohol, even though the ethanol in the drink has also been shown to cause cancer. So you’ll pardon me if I don’t stop investing in meat production anytime soon (figuratively speaking, since I don’t invest in the stock market; I’m a wary futurist).
All of the above does not mean that we won’t let go of meat eventually, in the long term. But at least in the short term, much more needs to happen before people radically change their dietary habits. Culture, as you may remember from a previous post about pace-layer analysis, is very slow to change.
The New Meat Start-Ups
Whenever human beings run into a wall that stands in the way of their desires, they either break it down or find ways to go around it. The most obvious solution in this case would be to develop new kinds of cooking and preservation methods for meat that do not involve the dangerous chemicals highlighted by the WHO. We can expect to see hamburger joints coming up with hamburgers made from unprocessed meat, possibly with an emphasis on freshness. And since it seems that barbecuing the meat can also cause cancer, other types of dishes like goulash might gain popularity in place of steaks.
While I don’t know what innovations will come up in the meat industry, I feel confident that they will arrive. Where there is great need, there is also great money – and innovators go where the money is.
Even in the face of the WHO’s declaration, there doesn’t seem to be much of a chance that people will stop eating meat anytime soon. Note the emphasis on “soon”. It is entirely possible that a movement will rise out of this declaration and urge people to let go of meat altogether. Such a movement will probably base itself on panic-mongering, distorting the evidence to convince people that all meat is bad for them. But even this kind of movement will take time to develop and gather political and social power, which means the meat industry probably has at least one generation’s lifetime – twenty years – to survive. Whether you like this assessment or not depends on your previous beliefs.
I would like to draw attention to one last issue at steak (pardon the pun). The WHO’s committee reported that “The most influential evidence came from large prospective cohort studies conducted over the past 20 years.” This innocent comment reveals once again the importance of conducting research and collecting data long into the future. Most research today only lasts as long as it takes the student to obtain his or her graduate degree, which makes it very difficult to collect data over time.
This is a topic for another post, really, so for now I’ll just end by saying that there is a very real need to support and fund lengthier research. Research that lasts decades provides the best evidence about the impact of nutrition and lifestyle over our lives, and it should be encouraged in the scientific community.
A few days ago I decided that I wanted a new business card for the up-and-coming new year. I headed straight to Fiverr and browsed through some of the graphic designers who offered their services for five dollars or more. After a few minutes, my choice was made: I decided to use the designer with more than a hundred 5-star ratings and literally no negative reviews at all.
Of course, the gig didn’t really cost five dollars. I added $10 to receive the source file as well, $5 for the design of a double-sided business card, and $5 for a “more professional work”, as the designer put it. Along with other bits, the gig cost $30 altogether, which is still a good price to pay for a well-designed card.
Then the troubles began.
I received the design within 24 hours. It was, simply put, nowhere near what I expected. The fonts were all wrong, the colors were messed up, and worst of all – the key graphical element on the front of the card was not centered properly, which indicated to me a lack of attention to detail that is outright unprofessional. So I asked for a modification, which was implemented within a day. It was not much better than the original. At that point I thanked the designer and concluded the gig with a review of her work. I gave her a rating of three stars – possibly more than I felt her skills warranted – and wrote a review applauding her effort to fix things, but also mentioning that I was not satisfied with the final result.
An hour later, the designer sent me a special plea. She asked me, practically in virtual tears, to remove my review, telling me that we could cancel the order and go our separate ways. She told me that her livelihood depends on Fiverr, and that without high ratings she would not be approached by other buyers in the future.
I knew that my money would not actually be returned to me, since Fiverr only deposits the refund in your Fiverr account, to be used for the next gigs you purchase. But seeing a maiden so distraught, and having an admittedly soft heart, I decided to play the gallant knight and deleted my negative review.
And so, I betrayed the community, and added to the myth of Fiverr.
Lessons for the No-Managers Workplace
In December 2011, the management guru Gary Hamel published an intriguing piece in the Harvard Business Review called “First, Let’s Fire All the Managers”. In the article, Hamel described a wildly successful company – The Morning Star Company – based on a model that makes managers unnecessary. The workers regulate themselves, criticize each other’s work, and deliberate together on the course of action their department should take. Simply put, everyone is a manager in Morning Star, and no one is.
You should read the article if this interests you (and it should), but to sum up – Morning Star has some 400 workers, so it’s not a small start-up, and the model it uses could definitely be scaled up for much larger companies. However, Hamel included a few admonishments, the first of which was the need for accountability: the employees at Morning Star must “deliver a strong message to colleagues who don’t meet expectations,” wrote Hamel. Otherwise, “self-management can become a conspiracy of mediocrity.”
The employees in Morning Star receive special training to make sure they understand how important it is that they provide criticism and feedback to other employees, and that they actually hurt all the other employees if such feedback is not provided and made public. Apparently the training works, since Morning Star has been steadily growing over the past few decades, while leaving its competitors far behind. In fact, today “Morning Star is the world’s largest tomato processor, handling between 25% and 30% of the tomatoes processed each year in the United States.”
Morning Star is a shining example of a no-managers workplace that actually works in a competitive market, since each person in the firm makes sure that the others are doing their jobs properly.
But what happens in Fiverr?
Is Fiverr Broken?
I have no idea how many service providers on Fiverr beg their customers for high ratings. I have a feeling it happens much more frequently than it should, and that soft-hearted customers like me (and probably you too) can be at least somewhat swayed by such passionate requests. The result is that some service providers on Fiverr enjoy a much higher rating than they deserve – which in effect deceives all their future potential customers.
Fiverr could easily take care of this issue by banning such requests for high ratings, and by deploying an algorithm that screens the messages between client and service provider to identify them. But why should Fiverr do that? Fiverr profits from having the seemingly best designers on the web, with an average five-star rating! Moreover, even in cases where the customer is extremely ticked off, all that will happen is that the service provider won’t get paid. Fiverr keeps the actual money, and only provides compensation in virtual currency that stays in the Fiverr system. This is a system, in short, in which nobody is happy except for Fiverr: the customer loses money and time, and the service provider occasionally loses money and gets no incentive or real feedback that would make him or her improve in the long run.
As I wrote earlier, Fiverr could easily handle this issue. Since they do not, I rather suspect they like the way things work right now. However, I believe that sooner or later they will find they have garnered a bad reputation, which will keep future customers away from their site. We know that great start-ups that received large amounts of funding and hype, like Quirky, have toppled before because of inherent problems in their structures. I hope Fiverr will not fail in a similar fashion, simply because it doesn’t bother to winnow the bad apples from its orchard.
Yesterday I suggested a scenario about the Skarp laser razor campaign, in which the new device disrupts the current shaving industry giants. Well, that was yesterday. Less than 24 hours after I published the piece in this blog, Kickstarter suspended (a polite word for “dumped”) the project. The people behind Skarp jumped ship immediately to Indiegogo, and seem to be doing quite well in there – gathering approximately $10,000 every hour, for the past ten hours.
There have been several accusations by experts and professionals in the fields of lasers and physics regarding the feasibility of the laser razor. And yet, the suspension by Kickstarter was formally for a very different reason: it turns out the Skarp team did not have a working prototype. Or maybe they did, but it worked so haphazardly that it could not be used for actual shaving.
So what’s going on here? Don’t the folks at Kickstarter consult experts before they agree to take up projects that may be physically impossible?
I believe they do not, and that’s generally a good thing.
In order to understand why I say so, let’s first try to see what purpose Kickstarter and crowdfunding platforms as a whole serve in society.
The Three Steps of Innovation
We often hear of the entrepreneur who had an amazing idea. A truly breathtaking invention formed in his mind, and he immediately proceeded to make it a reality, earning himself a few billion dollars and a vacation in the Bahamas on the way.
That, at least, is the myth.
In reality, innovation is based on three distinct steps:
Recombination of existing concepts into many new ideas;
Finding out which ideas are good, and which aren’t;
Rapidly iterating a good idea until it becomes an excellent one.
The Polymerase Chain Reaction (PCR) is an example of a unique recombination of existing concepts that changed the world. The PCR technique is used in nearly every biological lab as part of the work needed to sequence DNA, create new DNA strands, and genetically engineer bacteria, plants and even human cells. The technique was invented by Kary Mullis, who won the 1993 Nobel Prize in Chemistry for it, ‘simply’ by recombining existing techniques and automating them to a degree.
Many other winning inventions are in fact a recombination of existing ideas. Facebook, for example, relies on the recombination of a social network, the World Wide Web, smartphones, image and video storing, hashtags, and many others. Similarly, autonomous (driverless) cars are a recombination of computers, sensors, image processing, GPS, etc.
Since we’re constantly innovating, dozens (and sometimes hundreds and thousands) of new ideas are being added to the mix every year, and entrepreneurs are trying to recombine them in different and exciting ways to create new inventions. This is the first step of innovation: the frantic recombination of existing ideas by inventors from around the world.
The only problem is, most of these new inventions are, well, rubbish.
In his book “How to Fly a Horse”, Kevin Ashton (the inventor who gave the Internet of Things its name) details what happens to newly patented inventions in at least one firm – Davison Design. For the past 23 years, Davison has mainly taken money from customers to register their patents. Its revenues equal $45 million a year, with an average of 11,000 people signing with the company. How many actually made any money from their patents and inventions? Altogether, only 27 people have seen any money out of their patents. The statistics, in short, are grim for any inventor. You may think the market is eager to use your new idea, but you can never tell for certain until the product is actually on the market. In fact, Shikhar Ghosh of Harvard Business School has found that “About three-quarters of venture-backed firms in the U.S. don’t return investors’ capital”. So nobody knows which idea is going to be any good: not even the big venture capitalists who invest millions of dollars in those ideas.
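To put the odds Ashton describes in perspective, here is a quick back-of-the-envelope calculation. Note the assumption: I am comparing the 27 lifetime successes against a single year’s worth of sign-ups (11,000), so the true odds per inventor over the firm’s 23 years are even worse:

```python
signups_per_year = 11_000  # average people signing with Davison in a year (from the text)
successes = 27             # inventors who ever made money from their patents (from the text)

# Generous estimate: treat all 27 successes as coming from one year's cohort.
success_rate = successes / signups_per_year
print(f"Success rate: {success_rate:.2%}")              # roughly 0.25%
print(f"Odds against: about 1 in {round(signups_per_year / successes)}")
```

Even under that generous assumption, fewer than three inventors in a thousand see any return, which is the grim baseline against which any idea-winnowing mechanism should be judged.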
This is where the second step of innovation comes in: we have to winnow the good ideas from the bad ones. In the past, this function was performed only by government grant committees and investors. Distinguished committees would go over hundreds and thousands of idea submissions, and select the ones that seemed to have the best chance of success. Unfortunately, such committees are hard-pressed to support all the applicants, and as a result, 98-99% of ideas are refused funding.
Consider, on the other hand, Kickstarter and other crowdfunding platforms. In Kickstarter alone, 43% of campaigns reach their goals and obtain the money they needed to make their vision a reality. In a way, crowdfunding allows inventors to test their ideas: does the public want this new invention? Is it any good? Are people willing to pay for it… even before the factories have received the million dollar contract to manufacture all the parts?
In that way, crowdfunding platforms enable innovation by streamlining the second step: distinguishing the good ideas from the bad ones. And once a good idea has been found and supported – whether it’s an ice chest with a USB charger, or a pillow that covers the user’s head completely – the inventor keeps upgrading and changing the product so that it becomes better with each iteration. This is the reason that iPhone 6S is so much better than the original iPhone.
Innovation is the steppingstone on which our modern day society is built. Innovation leads to increased productivity, and as Paul Krugman says – “Productivity isn’t everything, but in the long run it is almost everything.” Innovative new companies are responsible for the majority of new jobs in the United States, and innovative ‘crazy’ ideas – the kind only few dared to support when they were originally proposed, like Airbnb or Google – have led to wholesale changes in the way society behaves.
Today’s new Google or Airbnb would not have had to look for elite investors: they could have gone to the crowdfunding platforms to ask for assistance, and their chances of receiving funding would have been much higher, at least in principle.
That is why Kickstarter is so important for innovation and for modern society: it allows the public to support many more innovators than ever before. And while quite a few of them are going to fail (probably most of them), the ones who make the big breakthroughs are going to change society. At the very least, even the failed campaigns show the rest of us the value of some ideas. Overall, crowdfunding platforms move society forward.
The Bad Apples
“That is all just swell,” you might say now, “but how can we be sure that the projects on Kickstarter are not a scam? How can we know for sure that the Skarp laser razor isn’t a scam? The experts were all against it!”
Well, here’s a newsflash: when it comes to innovation, you can’t always rely on the experts.
There are plenty of examples to support this statement. Both Lord Kelvin (the noted British physicist) and the great astronomer Simon Newcomb dismissed any attempt to build a heavier-than-air flying machine, a mere two years before the Wright brothers demonstrated the first successful airplane. The British Astronomer Royal Richard van der Riet Woolley confidently declared that “Space travel is utter bilge” – one year before Sputnik orbited the Earth. In fact, experts are wrong so often about the limits of possibility that Arthur C. Clarke issued his First Law about them –
“When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.”
In short, experts can be wrong too, even in matters as rigid as the laws of nature and the ways we can manipulate them. And it is much easier to get social developments and innovations wrong, since there is no perfect model of the human mind or of society. Thus, no expert could have forecast with certainty that people would upload their photos for millions to see (Facebook, Flickr, Instagram), or share their houses (Airbnb) and cars (Uber) with total strangers. And yet, these innovative start-ups came into existence and changed the world.
That does not mean, of course, that the public should support every wild promise on Kickstarter. In fact, I think Kickstarter did a good thing when they removed the Skarp project because the inventors had no fully working prototype. In the end, crowdfunding platforms need to balance the desire to protect their users from scams against the fact that it’s very difficult to distinguish between scams and some extremely innovative ideas. At least in this case, it seems Kickstarter decided to err on the side of caution.
Many are asking whether the Skarp laser razor is a scam, but that’s the wrong question. The real question is what purpose Kickstarter and other crowdfunding platforms should serve in our modern society, and the honest answer is probably that the users of these platforms stand a better chance of seeing their money dissipate into thin air – but altogether that’s a good thing, since more innovators overall get supported, and the few who succeed change the world.
So go ahead: support Skarp on Indiegogo, or any other crazy idea on Kickstarter, Tilt and the other crowdfunding platforms out there. Buy that new (barely functioning) 3D-printer, the shiniest (and fragile) aerial drone, or that dream-reader that doesn’t really work. Go ahead – now you have the justification for it: you’re promoting innovation in society. Or in other words – bring on the scams!
Shaving is one of the great hardships of my life (and I guess I should consider myself lucky that this is one of my top worries). Up until recent years there have only been two giants in the shaving market: Schick and Gillette. Both are engineering their razor blades with space-age technology, promising you a blade that looks and feels as if it were found floating in space, shining magnificently in the Sun’s bright rays.
And it stings. Oh, how it stings my skin.
Both companies are trying to minimize cuts to their customers’ skin, obviously, but getting the nicking frequency down to zero is a daunting task, and probably an impossible one. We’re dealing with blades here, after all, sharpened to the point where they could (allegedly) cut air molecules in twain. As the book of Proverbs admonishes us: “Can a man carry fire in his lap, without burning his clothes?”
I would think that the burning clothes would be of the least concern to the guy carrying fire in his lap (please don’t do that), but the point is clear. You play with fire, you get burned. You play with razors, you get cut.
Well, then, why don’t we change the paradigm of using a razor blade for shaving? That’s exactly the idea behind the Skarp Razor project, which has recently surged to new heights on everybody’s favorite crowdfunding platform: Kickstarter.
The basic idea is pretty simple. Instead of blades, the Skarp ‘razor’ uses a small laser beam with a wavelength selected specifically to cut human hair. It does not cut or burn the skin, needs no shaving foam, and only requires one AAA battery every month. Those, at least, are the promises on the campaign site.
The inventor behind the new razor, Morgan Gustavsson, has worked in the medical and cosmetic laser industry for three decades, and invented and patented the most common method for laser hair removal used in cosmetic beauty salons. Now he has perfected and miniaturized the technology (again, according to the campaign’s claims, which should be taken with a grain of salt) to bring it into everyone’s household.
If the Skarp Razor actually delivers on its promises, the consequences would be far-reaching, essentially disrupting the stagnant shaving industry. Schick and Gillette have both competed under a very limited paradigm: shaving is to be done with blades only. Their entire business model revolves around the sale of high-priced blades. How can they handle a competitor that sells a single razor that should last for nearly a lifetime of shaving?
Short answer: they can’t, at least not under their current business model. Unless they find a new breakthrough technology of their own, their business model will be disrupted within a year, and they may well find themselves on the ropes in five years or less. This may be yet another Kodak Moment: a huge industry giant in its field gets disrupted by an innovation that reaches the masses (digital cameras in smartphones), and declares bankruptcy five years later.
The possible disruption of this $4.13 billion market reveals an important principle of today’s industry, which has been mentioned before by Peter Diamandis, founder and chairman of the X-Prize foundation and co-founder of Singularity University: “If you don’t disrupt yourself, somebody else will.”
This principle is particularly relevant in the case of Schick and Gillette. The two giants have not faced any real competition except for each other for a long time now, and were thus unwilling to change their basic operating paradigms. They innovated, decorated and re-innovated their blades, but they did not find new ideas and concepts to rethink the process of shaving. Now, when the laser blade makes an appearance, they will need to frantically look for new answers to the threat.
Of course, nobody can forecast the future accurately, and the new laser shaving technology defies any attempt at foresight right now, because we don’t know exactly how it works. Furthermore, the initial product delivered to consumers next year is bound to be in a preliminary state: primitive and rough, and almost certainly disappointing to the wider public. The Skarp 2.0 will be infinitely better and more suited to the needs and wishes of consumers – but only if the company survives the first disappointment.
We can’t know yet whether the Skarp Razor is about to disrupt the shaving industry, especially since at the moment it’s no more than a promise on a crowdfunding site. However, if the invention does have merit and proves itself over the next year, the shaving industry giants will find themselves in a race against a new technology that they were not prepared for. I, for one, welcome such competition that will lower the prices of blades, and force the old guard to re-innovate and rethink their existing products and business models. I don’t envy the people at Gillette and Schick, though, for whom the next decade is going to be a hair-raising rollercoaster.
While visiting the Roger Williams Park Zoo in Rhode Island, I happened to take this photo of genetically modified pumpkins displaying a wide range of advertising materials, apparently for the corporate sponsors of zoo activities.
Well, obviously the pumpkins aren’t actually genetically modified – they were just painted or sculpted by human artists – but at the rate genetic engineering is progressing, it’s quite possible that in a few decades we will have genetically modified fruits and vegetables that actually display readable advertisements on them as they grow.
Now wouldn’t that be interesting?
I decided to take this chance to consider innovative ways in which future GMOs (Genetically Modified Organisms) could be used to promote and advertise products, ideas and corporations. To do that, I utilized a fascinating systematic thinking method for innovation, around which an entire consulting company called SIT (Systematic Inventive Thinking) was founded.
The principles of the SIT system were described in a 2003 article in the Harvard Business Review. In short, the main idea is to limit your creativity instead of trying to stretch it sky-high. Why is that so important? Imagine you’re on a first date, and the girl (or boy) leans forward across the table and asks you that age-old question: “Tell me about yourself!”
If you’re like most human beings, you probably freeze in complete bewilderment, unsure where to begin or to end, and what you should actually talk about. You’re lost in the chaos of your own mind, sinking below the waves of many thoughts and impulses: should you tell her about your trip to India? Or maybe about your ambitions for the future? Or maybe she really wants to hear about your bar-mitzvah?
Coming up with creative and innovative ideas is similar to dating, at least in this view. Many executives tell their staff to find and implement creative ideas in their products, leaving them floundering and resentful. Many (too many) creativity workshops look that way too: round tables of employees and executives who are told to be creative and simply “think up a new innovative product for the company!”
Such exercises rarely lead to good results. At best, the participants fall back on whatever ideas they’ve read or thought about before, and almost no new or innovative notions are produced at those meetings.
Now consider the alternative dating scene: your date asks a very simple question – “What did you eat this morning?” In this case, the answer is clear. You have a starting point that is safe and sound, and while admittedly not very interesting, the conversation and the jokes can start flowing from that point onwards. It works the same way with creativity: by putting constraints on your thinking process in a systematic fashion, you become capable of analyzing the situation in an orderly way, developing each innovative idea fully, one at a time.
The SIT method places constraints on the innovation process by forcing the thinkers to consider innovative changes to the current product in only five different directions: subtraction, multiplication, division, task unification, and attribute dependency. Let’s go over each one to think up innovative ways genetically engineered plants could be used for advertising.
SIT Thinking Tools
Subtraction means that instead of our natural tendency to add features to an existing product, we remove existing features, particularly the kind that seem vital and necessary.
How does this thinking tool relate to GMOs? Well, what would happen if we engineered a fruit without its skin or outer covering? The skin obviously serves to protect the soft and squishy interior, so it’s definitely an important part of the product. However, maybe we could make the skin thinner and translucent, so that consumers could see what they’re getting inside the fruit: whether the banana has dark stains on its edible part, or whether the tomato is rotten or has worms. That would certainly be an interesting advertising maneuver: “We have nothing to hide!”
By applying the multiplication thinking tool, we multiply – add more copies of – certain existing components of the product, but then alter them in a significant way. Gillette’s double-bladed razor is a well-known example: they added an extra blade, and then found a different use for it on the other side of the razor.
How about, then, engineering the fruit to contain more seeds – but ones that are actually viable, and grow into interesting and different kinds of fruit? The fruit’s producer could bring it to market as a tool for teaching children about the natural world, and even create a competition to find the “one golden seed” hiding in one fruit out of every hundred, from which a truly extraordinary fruit will grow.
The division tool makes us divide the product into its separate components – and then recombine them in some new way. In the case of genetically modified fruit, we can roughly separate the ‘product’ into seeds, edible flesh, skin and a stem. How can we recombine these four to make the final product more valuable to advertisers? Here’s an idea: make the seeds grow on the surface of the fruit, but as small as speckles, adding a shine to it. Or make the stem go through the entire fruit, like a skewer, and promote the fruit as one that can be easily roasted over a grill.
Which two tasks can be unified into a single component of the fruit? This one is easy: make the stem tasty, so that it can be eaten as a snack alongside the fleshy fruit. One can also imagine fruits that contain therapeutic compounds, so that eating them serves a double purpose: getting fed and getting healthy.
Attribute Dependency Change
The components and attributes of every product depend, in part, on its environment. Shoes for girls, for example, often come in pink (attribute: color). Watermelons are often sold in summertime, which is another relation between an attribute (time of sale) and the product.
Using this thinking tool, we can really go wild. If we focus only on color as an attribute, we can engineer fruit that visibly changes color when infected by certain bacteria, or whose color indicates when it was picked from the field, assuring consumers that they’re getting fresh produce. And this is just the beginning, since we can also play with the smell, texture, and even the weight and size of the fruit. So many opportunities here!
You may or may not like the ideas I gave for genetic engineering of plants. Regardless, this post was primarily an exercise in innovative thinking meant to provide a sneak peek at a wonderful methodology for innovation. You are warmly invited to suggest more ideas for genetic engineering of plants in the comments section, using the SIT methodology as a guide. And of course, you can use the principles of the SIT Methodology to innovate your own ideas for a product, service or company.
I’m sure you’ll make good use of the methodology, and will discover that innovating under constraints is as useful as it is fun.