Lessons from the Panama Papers

Welcome to the world without secrets.

We’ve all known for decades that politicians have used tax shelters for money laundering, to avoid paying tax in their own countries, and to avoid being identified with companies they were affiliated with in the past. Now the cat is out of the bag, with a new leak called the Panama Papers.

The Panama Papers contain about 11.5 million highly confidential documents that have detailed information about the dealings of more than 214,000 offshore companies. Such companies are most often used for money laundering and for obscuring connections between assets and their owners. Offshore companies are particularly useful to politicians, many of whom are required to declare their interests and investments in companies, and are usually required by law to forgo any such relations in order to prevent corruption.

It doesn’t look like that law is working too well.

BBC News covered the initial revelations from the Panama Papers in the following words –

“The documents show 12 current or former heads of state and at least 60 people linked to current or former world leaders in the data. They include the Icelandic Prime Minister, Sigmundur Gunnlaugson, who had an undeclared interest linked to his wife’s wealth and is now facing calls for his resignation. The files also reveal a suspected billion-dollar money laundering ring involving close associates of Russian President Vladimir Putin.”

According to Aamna Mohdin, the Panama Papers event is the largest leak to date by a fair margin. The source, whoever they are, sent more than 2.6 terabytes of information from the Panama-based company Mossack Fonseca.

Source: Atlas.

 

What lessons can we derive from the Panama Papers event?

 

Journalism vs. Government

The leaked documents were passed directly to one of Germany’s leading newspapers, Süddeutsche Zeitung, which shared them with the International Consortium of Investigative Journalists (ICIJ). Altogether, 109 media organizations in 76 countries have been analyzing the documents over the last year.

This state of affairs raises the question: why weren’t national and international police forces involved in the investigation? The answer seems obvious: the ICIJ had good reason to believe that suspected heads of state would nip such an investigation in the bud, and also alert Mossack Fonseca and its clients to the existence of the leak. In other words, journalists in 76 countries worked in secrecy under the noses of their governments, despite the fact that those very governments were supposed to help prevent international tax crimes.


Artificial Intelligence can Help Fight Corruption

The volume of leaked documents is massive. No other word for it. As Wikipedia details –

The leak comprises 4,804,618 emails, 3,047,306 database format files, 2,154,264 PDFs, 1,117,026 images, 320,166 text files, and 2,242 files in other formats.

All of this data had to be indexed so that human beings could go through it and make sense of it. To do that, the documents were processed using optical character recognition systems that made the data machine-readable. Once the texts were searchable, the indexing was essentially performed automatically, cross-matching the names of persons of interest with the relevant data.
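To make the idea concrete, here is a minimal sketch in Python of how such automated cross-matching might work, assuming the leaked documents have already been run through OCR and saved as plain-text files. The folder name and the list of names below are hypothetical placeholders for illustration, not part of the actual ICIJ toolchain.

import os
import re
from collections import defaultdict

PERSONS_OF_INTEREST = ["Sigmundur Gunnlaugsson", "Vladimir Putin"]  # example names

def build_index(folder):
    # Map every lower-cased word to the set of documents that contain it.
    index = defaultdict(set)
    for filename in os.listdir(folder):
        if not filename.endswith(".txt"):
            continue
        with open(os.path.join(folder, filename), encoding="utf-8") as f:
            for word in re.findall(r"\w+", f.read().lower()):
                index[word].add(filename)
    return index

def documents_mentioning(person, index):
    # A document matches only if it contains every word of the person's name.
    word_sets = [index.get(word, set()) for word in person.lower().split()]
    return set.intersection(*word_sets) if word_sets else set()

if __name__ == "__main__":
    index = build_index("ocr_output")  # hypothetical folder of OCR'd text files
    for person in PERSONS_OF_INTEREST:
        print(person, "->", sorted(documents_mentioning(person, index)))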

Could this investigative act have been performed without state-of-the-art computers and optical character recognition systems? Probably, but it would’ve required a small army of analysts and investigators, and the challenge would’ve been practically impossible for anything less than a governmental authority. Advances in computer science have opened the road for the public (in this case spearheaded by the ICIJ) to investigate the rich and the powerful.

 

Where’s the Missing Information?

So far, there has been no mention of any American politician with offshore connections to Mossack Fonseca. Does that mean American politicians are all pure and incorruptible? That doesn’t seem very likely. An alternative hypothesis, proposed on EoinHiggins.com, is that certain geopolitical conditions have made it difficult for Americans to use Panama as a tax shelter. But don’t you worry, folks: other financial firms in several countries are working diligently to provide tax shelters for Americans too.


These tax shelters have always drawn some public ire, but never in a major way. The public in many countries has never realized just how much of its tax money was being siphoned away for the benefit of those who could afford to pay for such services. As things become clearer now and public outrage begins to erupt, it is quite possible that governmental investigators will have to focus more of their attention on other tax shelter businesses. If that happens, we’ll probably see further revelations about many, many other politicians in the coming years.

Which leads us to the last lesson learned: we’re going into…

 

A World without Secrets

Wikileaks has demonstrated that there can no longer be secrets in any dealings between diplomats. The Panama Papers show that financial dealings are just as vulnerable. The nature of the problem is simple: when information was stored on paper, leaks were limited in size by the number of physical documents that could be whisked away. In the digital world, however, terabytes of information containing millions of documents can be carried in a single external hard drive hidden in one’s pocket, or uploaded online in a matter of hours. Therefore, just one disgruntled employee out of thousands can leak all of a company’s information.

Can any information be kept secret under these conditions?

The issue becomes even more complicated when you realize that the Mossack Fonseca leaker seems to have acted without any monetary incentive. Maybe he just wanted the information to reach the public, or to take revenge on Mossack Fonseca. Either way, consider how many others will do the same for money or because they’re being blackmailed by other countries. Can you actually believe that the Chinese have no spies in the American Department of Defense, or vice versa? Do you really think that the leaks we’ve heard about are the only ones that have happened?


Conclusions

The world is rapidly being shaken free of secrets. The Panama Papers event is just one more link in the chain that leads from a world where almost everything was kept in the dark, to a world where everything is known to everyone.

If you think this statement is an exaggeration, consider that in the last five years alone, data breaches at Sony, Anthem, and eBay resulted in the information of more than 300 million customers being exposed to the world. When most of us hear about data breaches like these, we’re only concerned about whether our passwords have found their way into hackers’ hands. We tend to forget that the hackers – and anyone who buys the information from them for a dime – also obtain our names, family relations, places of residence, and details about our lives in general and what makes us tick. If you still believe that any information you have online (and maybe on your computer as well) can be kept secret for long, you’re almost certainly fooling yourself.

As the Panama Papers incident shows, this forced transparency does not necessarily have to be a bad thing. It helps us see things for what they are, and understand how the rich and powerful operate. Now the choice is left to us: will we try to go back to a world where everything is hidden away, and tell ourselves beautiful stories about our honest leaders – or will we accept reality and create and enforce better laws to combat tax shelters?

 

The Citizens Who Solve the World’s Problems

It’s always nice when news items that support each other and indicate a certain future appear in the same week, especially when each of them is exciting on its own. Last week we saw this happen with three different news items:

  1. A scientific finding that a single type of bacteria grows 60 percent better in space than on Earth. The germs used in the experiment were collected by the public;
  2. A new Kickstarter project for the creation of a DNA laboratory for everyone;
  3. A new project proposed on a crowdfunding platform, requesting public support for developing the means for rapid detection of the Zika virus in Brazil, without the need for a laboratory.

Let’s go over each to see how they all come together.

 

Space Microbes

Between 2012 and 2014, citizens throughout the United States collected bacteria samples from their environment using cotton swabs, and mailed them to the University of California, Davis. Out of the large number of samples that arrived at the lab, 48 bacterial strains were isolated and selected to be sent to space, on board the International Space Station (ISS). Most of the strains behaved similarly on Earth and in space. One strain, however, surpassed all expectations and proliferated rapidly, growing 60% better in space.

Does this mean that the bacterium in question, Bacillus safensis, is better adapted for life in space? I would stay wary of such assertions. We don’t know yet whether the improved growth was a result of the micro-gravity conditions in the space station, or of some other unquantified factor. It is entirely possible that the levels of humidity, the oxygen concentrations, or the quality of the medium were somehow altered on the space station. The result, in short, could easily be a fluke rather than an indicator that some bacteria can grow better in micro-gravity. We’ll have to wait for further evidence before reaching a final conclusion on this issue.

The most exciting thing for me here is that the bacterium in question was collected by the public, in a demonstration of the power of citizen science. People from all over America took part in the project, and as a result of their combined effort, the scientists ended up with a large number of strains, some of which they probably would not have thought to use in the first place. This is one of the main strengths of citizen science: providing many samples of research material for the scientists to analyze and experiment on.

Study author Darlene Cavalier swabs the crack of the Liberty Bell to collect bacterial samples. Credit: CC by 4.0

DNA Labs for Everyone

Have you always wanted to check your own DNA? To find out whether you have a certain variant of a gene, or identify the animals whose meat appears in your hamburger? Well, now you can do that easily by ordering the Bento Lab: “A DNA laboratory for everyone”.

The laptop-sized lab includes a centrifuge for the extraction of DNA from biological samples, a PCR thermocycler to target specific DNA sequences, and an illuminated gel unit to visualize the results and ascertain whether or not the sample contains the DNA sequence you were looking for. All that, for less than one thousand dollars. This is ridiculously cheap, particularly when you understand that similar lab equipment would easily have cost tens of thousands of dollars just twenty years ago.

The Bento Lab

The Kickstarter project has already gained support from 395 backers, pledging nearly $150,000 to the cause, and surpassing the goal by 250% in just ten days. That’s amazing progress for a project that’s really only suitable for hard-core makers and bio-hackers.

Why is the Bento Lab so exciting? Because it gives power to the people. The current model is very limited, but the next versions of mobile labs will contain better equipment and provide better capabilities to the bio-hackers who purchase them. You don’t have to be a futurist to say that – already there are other projects attempting to bring CRISPR technology for highly-efficient gene editing to the masses.

This, then, is a great example of the way citizen science is going to keep evolving: people won’t just collect bacterial samples in the streets and send them to distinguished scientists. Instead, ordinary people – average Joes like you and me – will be able to experiment on these bacteria in their homes and garages.

Should you be scared? Obviously, yeah. The power to re-engineer biology is nothing to scoff at, and we will need to think up ways to regulate public bio-engineering. However, the public could also use this kind of power to contribute to scientific projects around the world, to sequence their own DNA, and eventually to create biological therapeutics in their own homes.

Which brings us to the last news item I wanted to write about in this post: citizens developing means for rapid detection of Zika virus.

 

Entrepreneurs against Viruses

The Zika virus has begun spreading rapidly in Brazil, with devastating consequences. The virus can spread from pregnant women to their fetuses, and has been linked to a serious birth defect of the brain called microcephaly in babies. According to the Centers for Disease Control and Prevention, the virus will likely continue to spread to new areas.

Although the World Health Organization declared the Zika virus a public health emergency merely two months ago, citizen scientists are already working diligently to develop new ways to detect the virus. A UK-Israel-Brazil team has sprung up, with young biotech entrepreneurs leading the research to create a better system for rapid detection of the virus in human beings and mosquitoes. The group is now asking the public to chip in and back the project, and has already gathered nearly $6,000.

This initiative is a result of the movement that brings the capability to do science to everyone. When every citizen armed with an undergraduate degree in biology can do science in his or her home, we shouldn’t be surprised when new methods for the detection of viruses crop up in distant places around the world. We’re basically decentralizing the scientific community – and as a result we can have many more people working on strange and wonderful ideas, some of which will actually bear fruit to the benefit of all.

 

Conclusions

As scientific devices and appliances become cheaper and make their way into the hands of individuals around the world, citizen science becomes more popular and has an ever greater impact. Today we see the rise of the citizen scientists – those who are not supported by universities or research centers, but instead start conducting experiments in their homes.

A decade from now, we will see at least one therapeutic being manufactured by citizen scientists cheaply and easily, undermining the high prices demanded by pharma companies for their drugs. Heck, even kids will be able to do that kind of science in garage labs. Less than a decade later, we will witness citizen scientists actually conducting medical research on their own, by running analyses of the medical records of hundreds – maybe millions – of people to uncover how new or existing therapeutics can be used to treat certain medical conditions. Many of these research projects will not be supported by the government or big pharma with the intent to make money, but will instead be supported by the public itself on crowdfunding sites.

Of course, for all that to happen we need to support citizen scientists today. So go ahead – contribute to the campaign against Zika, or purchase a Bento Lab for your kitchen, or find citizen science projects or games for kids that you can join on SciStarter. We can all take part in improving science, together.

 

Visit other posts in my blog about crowdfunding projects, such as Robit: A new contender in the field of house robots; or read my analysis Why crowdfunding scams are good for society.

How I Became a Dreaded Zionist Robotic Spy, or – Why We Need a Privacy Standard for Robots

It all began in a horribly innocent fashion, as such things often do. The Center for Middle East Studies at Brown University, near my home, held a “public discussion” about the futures of Palestinians in Israel. Naturally, as an Israeli living in the States, I’m still very much interested in this area, so I took a look at the panelist list and discovered immediately that they all came from the same background and shared the same point of view: Israel was the colonialist oppressor, and that was pretty much all there was to it in their view.


Quite frankly, this seemed bizarre to me: how can you have a discussion about the future of a people in a region without understanding the complexities of their geopolitical situation? How can you talk about the future in a war-torn region like the Middle East, when nobody speaks about security issues, or presents the state of mind of Israeli citizens or their government? In short, how can you have a discussion when all the panelists say exactly the same thing?

So I decided to do something about it, and therein lies my downfall.

I am the proud co-founder of TeleBuddy – a robotics services start-up company that operates telepresence robots worldwide. If you want to reach somewhere far away – Israel, California, or even China – we can place a robot there so that instead of wasting time and health flying, you can just log into the robot and be there immediately. We mainly use Double Robotics’ robots, and since I had one free to use, I immediately thought we could use it to bring a representative of the Israeli point of view to the panel – in a robotic body.

Things began moving in a blur from that point. I obtained permission from Prof. Beshara Doumani, who organized the panel, to bring a robot to the event. StandWithUs – an organization that disseminates information about Israel in the United States – graciously agreed to send a representative by the name of Shahar Azani to log into the robot, and so it happened that I came to the event with possibly the first-ever robotic diplomat.


Things went very well at the event itself. While my robotic friend was not allowed to speak from the stage, he talked with people in the venue before the event began, and had plenty of fun. Some of the people at the event seemed excited about the robot. Others were reluctant to approach him, so he talked with other people instead. The entire thing was very civil, as other participants in the panel later remarked. I really thought we had found a good use for the robot, and even suggested to the organizers that next time they could use TeleBuddy’s robots to ‘teleport’ a different representative – maybe a Palestinian – to their event. I went home happily, feeling I had made just a little bit of a difference in the world and contributed to an actual discussion between the two sides in a conflict.

A few days later, Open Hillel published a statement about the event, as follows –

“In a dystopian twist, the latest development in the attack on open discourse by right-wing pro-Israel groups appears to be the use of robots to police academic discourse. At a March 3, 2016 event about Palestinian citizens of Israel sponsored by Middle East Studies at Brown University, a robot attended and accosted students. The robot used an iPad to display a man from StandWithUs, which receives funding from Israel’s government.

Before the event began, students say, the robot approached students and harassed them about why they were attending the event. Students declined to engage with this bizarre form of intimidation and ignored the robot. At the event itself, the robot and the StandWithUs affiliate remained in the back. During the question and answer session, the man briefly left the robot’s side to ask a question.

It is not yet known whether this was the first use of a robot to monitor Israel-Palestine discourse on campus. … Open Hillel opposes the attempts of groups like StandWithUs to monitor students and faculty. As a student-led grassroots campaign supported by young alumni, professors, and rabbis, Open Hillel rejects any attempt to stifle or target student or faculty activists. The use of robots for purposes of surveillance endangers the ability of students and faculty to learn and discuss this issue. We call upon outside groups such as StandWithUs to conduct themselves in accordance with the academic principles of open discourse and debate.”

 

 

I later ran into some of the students who had been at the event, and asked them why they believed the robot was used for surveillance, or to harass students. In return, they accused me of being a spy for the Israeli government. Why? Obviously, because I operated a “surveillance drone” on American soil. That’s perfect circular logic.

 

Lessons

There are lessons aplenty to be obtained from this bizarre incident, but the one that strikes me in particular is that you can’t easily ignore existing cultural sentiments and paradigms without taking a hit in the process. The robot was obviously not a surveillance drone, or meant for surveillance of any kind, but Open Hillel managed to rebrand it by relying on fears that have deep roots in the American public. They did it to promote their own goal of getting some PR, and they did it so skillfully that I can’t help but applaud them for it. Quite frankly, I wish their PR guys were working for me.

That said, there are issues here that need to be dealt with if telepresence robots are ever to become part of critical discussions. The fear that the robot may be recording or taking pictures at an event is justified – a tech-savvy person controlling the robot could certainly find a way to do that. However, I can’t help but feel that there are less clever ways to accomplish that, such as using one’s smartphone, or the covert Memoto lifelogging camera. If you fear being recorded in public, you should know that telepresence robots are probably the least of your concerns.

 

Conclusions

The honest truth is that this is a brand new field for everyone involved. How should robots behave at conferences? Nobody knows. How should they talk with human beings at panels or public events? Nobody can tell yet. How can we make human beings feel more comfortable when they share a space with a suit-wearing robot that can potentially record everything it sees? Nobody has any clue whatsoever.

These issues should be taken into consideration in any venture to involve robots in the public sphere.

It seems to me that we need some kind of standard, developed in collaboration between ethicists, social scientists and roboticists, which will ensure a high level of data encryption for telepresence robots and an assurance that any data collected by the robot will be deleted on the spot.

We need, in short, to develop proper robotic etiquette.

And if we fail to do that, then it shouldn’t really surprise anyone when telepresence robots are branded as “surveillance drones” used by Zionist spies.

Science Just Wants To Be Free

This article was originally published in the Huffington Post

 

For a long time, scientists were held in thrall by publishers. They worked voluntarily – without getting any pay – as editors and reviewers for the publishers, and they allowed their research to be published in scientific journals without receiving anything in return. No wonder scientific publishing has been considered a lucrative business.

Well, that’s no longer the case. Now, scientific publishers are struggling to maintain their stranglehold over scientists. If they succeed, science and the pace of progress will take a hit. Luckily, the entire scientific landscape is turning on them – but a little support from the public will go a long way in ensuring the eventual downfall of an institution that is no longer relevant or useful to society.

To understand why things are changing, we need to look back in history to 1665, when the British Royal Society began publishing research results in a journal called Philosophical Transactions of the Royal Society. Since the number of pages available in each issue was limited, the editors could only pick the most interesting and credible papers to appear in the journal. As a result, scientists from all over Britain fought to have their research published in the journal, and any scientist whose research was published in an issue gained immediate recognition throughout Britain. Scientists were even willing to become editors for scientific journals, since that was a position that commanded respect – and gave them the power to push their views and agendas in science.

Thus was the deal struck between scientific publishers and scientists: the journals provided a platform for the scientists to present their research, and the scientists fought tooth and nail to have their papers accepted into the journals – often paying from their own pockets for it to happen. The journal publishers then held full copyright over the papers, to ensure that the same paper would not be published in a competing journal.

That, at least, was the old way for publishing scientific research. The reason that the journal publishers were so successful in the 20th century was that they acted as aggregators and selectors of knowledge. They employed the best scientists in the world as editors (almost always for free) to select the best papers, and they aggregated together all the necessary publishing processes in one place.

And then the internet appeared, along with a host of other automated processes that let every scientist publish and disseminate a new paper with minimal effort. Suddenly, publishing a new scientific paper and making the scientific community aware of it could have a radical new price tag: it could be completely free.

Free Science

Let’s go through the process of publishing a research paper, and see how easy and effortless it has become:

  1. The scientist sends the paper to the journal: Can now be conducted easily through the internet, with no cost for mail delivery.
  2. The paper is rerouted to the editor dealing with the paper’s topic: This is done automatically, since the authors specify keywords that route the paper to the right editor’s e-mail. Since the editor is actually a scientist volunteering to do the work for the publisher, there’s no cost attached anyway. Neither is there need for a human secretary to spend time and effort on cataloguing papers and sending them to editors manually.
  3. The editor sends the paper to specific scientific reviewers: All the reviewers are working for free, so the publishers don’t spend any money there either.

Let’s assume that the paper has been accepted, and is going to appear in the journal. Now the publisher must:

  1. Paginate, proofread, typeset, and ensure the use of proper graphics in the paper: These tasks are now performed nearly automatically using word processing programs, and are usually handled by the original authors of the paper.
  2. Print and distribute the journal: This is the only step that costs actual money by necessity, since it is performed in the physical world, and atoms are notoriously more expensive than bits. But do we even need this step anymore? I have been walking around in the corridors of academia for more than ten years, and I’ve yet to see a scientist with his nose buried in a printed journal. Instead, scientists are reading the papers on their computer screens, or printing them in their offices. The mass-printed version is almost completely redundant. There is simply no need for it.

In conclusion, it’s easy to see that while the publishers served an important role in science a few decades ago, they are just not necessary today. The above steps can easily be conducted by community-managed sites like arXiv, and even the selection process of high-quality papers can be performed today by the scientists themselves, in forums like Faculty of 1000.

The publishers have become redundant. But worse than that: they are damaging the progress of science and technology.

The New Producers of Knowledge

A few years from now, the producers of knowledge will not be human scientists but computer programs and algorithms. Programs like IBM’s Watson will skim through hundreds of thousands of research papers and derive new meanings and insights from them. This would be an entirely new field of scientific research: retrospective research.

Computerized retrospective research is happening right now. A new model in developmental biology, for example, was discovered by an artificial intelligence engine that went over just 16 experiments published in the past. Imagine what would happen when AI algorithms cross-match thousands of papers from different disciplines, and come up with new theories and models that are supported by the research of thousands of scientists from the past!

For that to happen, however, the programs need to be able to go over the vast number of research papers out there, most of which are copyrighted, and held in the hands of the publishers.

You may say this is not a real problem. After all, IBM and other large data companies can easily cover the millions of dollars which the publishers will demand annually for access to the scientific content. What will the academic researchers do, though? Many of them do not enjoy the backing of big industry, and will not have access to scientific data from the past. Even top academic institutions like Harvard University find themselves hard-pressed to cover the annual costs demanded by the publishers for accessing papers from the past.

Many ventures for using this data are based on the assumption that information is essentially free. We know that Google is wary of uploading scanned books from the last few decades, even if these books are no longer in circulation. Google doesn’t want to be sued by the copyright holders – and thus is waiting for the copyrights to expire before it uploads the entire book and lets the public enjoy it for free. So many free projects could be conducted to derive scientific insights from literally millions of research papers from the past. Are we really going to wait for nearly a hundred years before we can use all that knowledge? Knowledge, I should mention, that was gathered by scientists funded by the public – and should thus remain in the hands of the public.

 

What Can We Do?

Scientific publishers are slowly dying, while free publication and open access to papers are becoming the norm. The process of transition, though, is going to take a long time still, and provides no easy and immediate solution for all those millions of research papers from the last century. What can we do about them?

Here’s one proposal. It’s radical, but it highlights one possible way of action: have the government, or an international coalition of governments, purchase the copyrights for all copyrighted scientific papers, and open them to the public. The venture will cost a few billion dollars, true, but it will only have to occur once for the entire scientific publishing field to change its face. It will set right the old wrong of hiding research behind paywalls. That wrong was necessary in the past when we needed the publishers, but now there is simply no justification for it. Most importantly, this move will mean that science can accelerate its pace by easily relying on the roots cultivated by past generations of scientists.

If governments don’t do that, the public will. Already we see the rise of websites like Sci-Hub, which provide free (i.e. pirated) access to more than 47 million research papers. Having been persecuted by both the publishers and the government, Sci-Hub has just recently been forced to move to the Darknet – the dark and anonymous section of the internet. Scientists who want to browse through past research results – which were almost entirely paid for by the public – will thus have to move over to the Darknet, which is where weapon smugglers, pedophiles and drug dealers lurk today. That’s a sad turn of events that should make you think. Just be careful not to sell your thoughts to the scholarly publishers, or they may never see the light of day.

 

Dr Roey Tzezana is a senior analyst at Wikistrat, an academic manager of foresight courses at Tel Aviv University, blogger at Curating The Future, the director of the Simpolitix project for political forecasting, and founder of TeleBuddy.

Robit: A New Contender in the Field of House Robots

The field of house robots has been abuzz for the last two years. It began with Jibo – the first cheap house robot, which was originally advertised on Indiegogo and gathered nearly $4 million. Jibo doesn’t look at all like Asimov’s vision of humanoid robots. Instead, it resembles a small cartoon-like version of EVE from the movie WALL-E. Jibo can understand voice commands, recognize and track faces, and even take pictures of family members and speak and interact with them. It can do all that for just $750 – which seems like a reasonable deal for a house robot. Romo is another house robot, for just $150 or so, with a cute face and a quirky attitude, which sadly went out of production last year.

 

Pictures of house robots: Pepper (~$1,600), Jibo (~$750), Romo (~$130). Image on the right originally from That’s Really Possible.

 

Now comes a new contender in the field of house robots: Robit, “The Robot That Gets Things Done”. It moves around the house on its three wheels, wakes you up in the morning, looks for lost items like your shoes or keys on the floor, detects smoke and room temperature, and even delivers beer for you on a tray. And it does all that for just $349 on Indiegogo.


I interviewed Shlomo Schwarcz, co-founder & CEO at Robit Robot, about Robit and the present and future of house robots. Schwarcz emphasized that unlike Jibo, Robit is not supposed to be a ‘social robot’. You’re not supposed to talk with it or have a meaningful relationship with it. Instead, it is your personal servant around the house.

“You choose the app (guard the house, watch your pet, play a game, dance, track objects, find your lost keys, etc.) and Robit does it. We believe people want a Robit that can perform useful things around the house rather than just chat.”

It’s an interesting choice, and it seems that other aspects of Robit conform to it. While Jibo and Romo are pleasant to look at, Robit’s appearance can be somewhat frightening, with a head that resembles that of a human baby. The question is, can Robit actually do everything promised in the campaign? Schwarcz mentions that Robit is essentially a mobile platform that runs apps, and the developers have created apps that cover the common and basic usages: remote control from a smartphone, movement and face detection, dance, and a “find my things” app.

Other, more sophisticated apps will probably be left to third parties. These will include Robit analyzing foodstuff and determining its nutritional value, launching toy missiles at items around the house using a tiny missile launcher, and keeping watch over your cat so that it doesn’t climb on that precious sofa that used to belong to your mother-in-law. These are all great ideas, but they still need to be developed by third parties.

This is where Robit both wins and fails at the same time. The developers realized that no robotic device in the near future is going to be a standalone achievement. They are all going to be connected together, learn from each other and share insights by means of a virtual app market that can be updated every second. When used that way, robots everywhere can evolve much more rapidly. And as Schwarcz says –

“…Our vision [is] that people will help train robots and robots will teach each other! Assuming all Robits are connected to the cloud, one person can teach a Robit to identify, say a can and this information can be shared in the cloud and other Robits can download it and become smarter. We call these bits of data “insights”. An insight can be identifying something, understanding a situation, a proper response to an event or even just an eye and face expression. Robots can teach each other, people will vote for insights and in short time they will simply turn themselves to become more and more intelligent.”
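To illustrate what such an exchange of "insights" might look like, here is a toy sketch in Python. All the names in it (the Insight record, the in-memory InsightCloud) are hypothetical stand-ins for whatever Robit's real cloud service would do; the sketch only shows the idea of one robot uploading a learned identification, the community voting on it, and other robots downloading the best-voted insights.

from dataclasses import dataclass

@dataclass
class Insight:
    label: str          # what was learned to be identified, e.g. "soda can"
    model_blob: bytes   # serialized recognition data produced by the robot
    votes: int = 0      # community votes for how useful the insight is

class InsightCloud:
    # In-memory stand-in for the shared cloud repository.
    def __init__(self):
        self._store = {}

    def upload(self, insight):
        self._store[insight.label] = insight

    def vote(self, label):
        self._store[label].votes += 1

    def download_best(self, min_votes=1):
        return [i for i in self._store.values() if i.votes >= min_votes]

# One robot teaches, the others learn:
cloud = InsightCloud()
cloud.upload(Insight("soda can", model_blob=b"\x00\x01"))
cloud.vote("soda can")
for insight in cloud.download_best():
    print("Downloading insight:", insight.label)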

That’s an important vision for the future, and one that I fully agree with. The only problem is that it requires the creation of an app market for a device that is not yet out there on the market and in people’s houses. The iPhone app store was an overnight success because the device reached the hands of millions in the first year of its existence, and probably because it also was an organic continuation of the iTunes brand. At the moment, though, there is no similar app management system for robots, and certainly not enough robots out there to justify the creation of such a system.

At the moment, the Robit crowdfunding campaign is progressing slowly. I hope that Robit makes it through, since it’s an innovative idea for a house robot, and definitely has potential. Whether it succeeds or fails, the campaign mainly shows that the house robot concept is one that innovators worldwide are rapidly becoming attached to, and are trying to find the best ways to implement. Twenty years from now, we’ll laugh about all the wacky ideas these innovators had, but the best of those ideas – those that survived the test of time and market – will serve us in our houses. Seen from that perspective, Schwarcz is one of those countless unsung heroes: the ones who try to make a change in a market that nobody understands, and dare greatly.

Will he succeed? That’s for the future to decide.

 

 

Images of Israeli War Machines from 2048

Do you want to know what war would look like in 2048? The Israeli artist Pavel Postovit has drawn a series of remarkable images depicting soldiers, robots and mechs – all in the service of the Israeli army in 2048. He even drew aerial ships resembling the infamous Helicarrier from The Avengers (which had an unfortunate tendency to crash every second week or so).

Pavel is not the first artist to make an attempt to envision the future of war. Jakub Rozalski before him tried to reimagine World War II with robots, and Simon Stalenhag has many drawings that demonstrate what warfare could look like in the future. Their drawings, obviously, are a way to forecast possible futures and bring them to our attention.

Pavel’s drawings may not be based on rigorous foresight research, but they don’t have to be. They are mainly focused on showing us one way the future may unfold. Pavel himself does not pretend to be a futures researcher, and told me that –

“I was influenced by all kind of different things – Elysium, District 9 [both are sci-fi movies from the last few years], and from my military service. I was in field intelligence, on the border with Syria, and was constantly exposed to all kinds of weapons, both ours and the Syrians.”

Here are a few of the drawings, divided according to categories I added, to give you a sense of Pavel’s vision of the future. Be aware that the last picture is the most haunting of all.

 

Mechs in the Battlefield

Mechs are a form of ground vehicle with legs – much like Boston Dynamics’ Alpha Dog, which they are presumably based on. The most innovative of those mechs is the DreamCatcher – a unit with arms and hands that is used to collect “biological intelligence in hostile territory”. In one particularly disturbing image we can see why it’s called “DreamCatcher”, as the mech beheads a deceased human fighter and takes the head for inspection.


Apparently, mechs in Pavel’s future operate almost autonomously – they can reach hostile areas on the battlefield and carry out complicated tasks on their own.

 

Soldiers and Aerial Drones

Soldiers in the field will be accompanied by aerial drones. Some of the drones will be larger than others – the Tinkerbell, for example, can serve both for recon and as personal CAS (Close Air Support) for the individual soldier.


Other aerial drones will be much smaller, and will be deployed as a swarm. The Blackmoth, for example, is a swarm of stealthy micro-UAVs used to gather tactical intelligence on the battlefield.


 

Technology vs. Simplicity

Throughout Pavel’s visions of the future we can see a repeated pattern: the technological prowess of the West is going to collide with the simple lifestyle of natives. Since the images depict the Israeli army, it’s obvious why the machines are essentially fighting or constraining the Palestinians. You can see in the images below what life might look like in 2048 for Arab civilians and combatants.


Another interesting picture shows Arab combatants dealing with a heavily armed combat mech by trying to make it lose its balance. At the same time, one of the combatants is sitting to the side with a laptop – presumably trying to hack into the robot.


 

The Last Image

If the images above have made you feel somewhat shaken, don’t worry – it’s perfectly normal. You’re seeing here a new kind of warfare, in which robots take an extremely active part against human beings. That’s war for you: brutal and horrible, and there’s not much to be done about it. If robots can actually minimize the amount of suffering on the battlefield by replacing soldiers, and by carrying out tasks with minimal casualties for both sides – it might actually be better than the human-based model of war.

Perhaps that is why I find the last picture the most horrendous one. You can see in it a combatant, presumably an Arab, with a bloody machete next to him and two prisoners that he’s holding in a cage. The combatant is reading a James Bond book. The symbolism is clear: this is the new kind of terrorist / combatant. He is vicious, ruthless, and well-educated in Western culture – at least well enough to develop his own ideas for using technology to carry out his ideology. In other words, this is an ISIS combatant, part of a movement that is beginning to employ some of the technologies of the West, like aerial drones, without adhering to the moral theories that restrict their use by nations.


 

Conclusion

The future of warfare in Pavel’s vision is beginning to leave the paradigm of human-on-human action, and is rapidly moving into robotic warfare. It is very difficult to think of a military future that does not include robots in it, and obviously we should start thinking right now about the consequences, and how (and whether) we can imbue robots with sufficient autonomous capabilities to carry out missions on their own, while still minimizing casualties on the enemy side.

You can check out the rest of Pavel’s (highly recommended) drawings in THIS LINK.

Review: Star Wars – the Force Awakens… and Falls to Sleep in the Middle of the Movie

I’ve finally had the chance to watch Star Wars – The Force Awakens, and I’m not going to sweeten the deal: it was incredibly mediocre. The director mainly played on nostalgia to replace the need for humor, real drama or character development. I’m not saying you shouldn’t watch it – just don’t set your expectations too high.

The really interesting thing in the movie for me, though, was the ongoing Failure of the Paradigm woven throughout the movie. As has often been mentioned in the past, Star Wars is in fact a medieval tale of knights in shining armor, a princess in distress (an actual princess! in space!), an evil dark wizard and some unresolved father-son issues. So yeah, we have a civilization that is technologically advanced enough to travel between planets at warp speed without much need for fuel, but we see no similar developments in any other fields: no nano-robots, no human augmentation, no biological warfare, no computer-brain interface, and absolutely no artificial intelligence. And please don’t insult my intelligence by claiming that R2D2 has one.

Star Wars: a medieval space tale of knights and damsels in distress. Image originally from GeekTyrant

The question we should be asking is why. Why would any script writer ignore so many of these potential technological developments – some of which are bound to pop up in the next few decades – and focus instead on plots around which countless other stories have been told and retold throughout thousands of years?

The answer is the Failure of Paradigm: we are stuck in the current paradigm of humanity, love, heroes and free will expressed by biological entities. It takes a superb director and script writer – the Wachowskis’ The Matrix comes to mind – to create an excellent movie that makes you rethink those paradigms. But if you stick with the current paradigms, all you need is an average script, an average director and a lot of explosions to create a blockbuster.

Star Wars is a great example of how NOT to make a science fiction movie. It does not explore the boundaries of what’s possible and impossible in any significant way. It does not make us consider the impact of new technologies, or the changing structure of humanity. It sticks to the old lines and old terms: evil vs. good, empire vs. rebels, father vs. son, and a dashing hero with a bumbling damsel in distress (even though the damsel in the new movie is male). It is not science fiction. Instead, it is a fantasy movie.

And that’s great for some people. Heck, maybe even most people. That’s why it’s the ruling paradigm at the moment – it makes people feel happy and content. But I can’t help regretting the opportunity lost here. A movie with such a huge audience could make people think. The director could have involved a sophisticated AI in the plot, to make people consider the future of working with artificial virtual assistants. Instead we got a clownish robot. And destroying planets with cannons, requiring immense energy output? What evil empire in its right mind would use such an inefficient method? Why not, instead, just reprogram a single bacterium to create ‘grey goo’ – a self-replicating nano-robot that can devour all humans in its path in order to make more replicas of itself?

The answer is obvious: developments like these would make this fictional world too different from anything we’re willing to accept. In a world of sophisticated risk-calculating AI, there’s not much place for heroics. In a world of nano-technology, there’s no place for wasteful explosions. And in a world with brain-machine interfaces, it is entirely possible that there’s no place for love, biological or otherwise. All of these paradigms that are inherent to us would be gone, and that’s a risk most directors and script writers just aren’t willing to take.

So go – watch the new Star Wars movie, for old times’ sake. But after you do that, don’t skimp on some other science fiction movies from the last couple of years that force us to rethink our paradigms. I recommend Chappie and Ex Machina from last year in particular. These movies may not have the same number of eager followers, and in some cases they are quite disturbing (Chappie only received a rating of 31% on Rotten Tomatoes) – but they will make you think between the explosions. And in the end, isn’t that what we should expect from our science fiction?

 

The Future of Genetic Engineering: Following the Eight Pathways of Technological Advancement

The future of genetic engineering at the moment is a mystery to everyone. The concept of reprogramming life is an oh-so-cool idea, but it is mostly being used nowadays in the most sophisticated labs. How will genetic engineering change in the future, though? Who will use it? And how?

In an attempt to provide a starting point for a discussion, I’ve analyzed the issue according to Daniel Burrus’ “Eight Pathways of Technological Advancement”, found in his book Flash Foresight. While the book provides more insights about creativity and business skills than about foresight, it does contain some interesting gems like the Eight Pathways. I’ve led workshops in the past, where I taught chief executives how to use this methodology to gain insights about the future of their products, and it was a great success. So in this post we’ll try applying it to genetic engineering – and we’ll see what comes out.


Eight Pathways of Technological Advancement

Make no mistake: technology does not “want” to advance or to improve. There is no law of nature dictating that technology will advance, or in what direction. Human beings improve technology, generation after generation, to better solve their problems and make their lives easier. Since we roughly understand humans and their needs and wants, we can often identify how technologies will improve in order to answer those needs. The Eight Pathways of Technological Advancement, therefore, are generally those that adapt technology to our needs.

Let’s go briefly over the pathways, one by one. If you want a better understanding and more elaborate explanations, I suggest you read the full Flash Foresight book.

First Pathway: Dematerialization

By dematerialization we mean literally removing atoms from the product, leading directly to its miniaturization. Cellular phones, for example, have become much smaller over the years, as have computers, data storage devices and generally any tool that humans wanted to make more efficient.

Of course, not every product undergoes dematerialization. Even if we were to miniaturize car engines, the cars themselves would still have to stay large enough to hold at least one passenger comfortably. So we need to take into account that the device should still be able to fulfill its original purpose.

Second Pathway: Virtualization

Virtualization means that we take certain processes and products that currently exist or are being conducted in the physical world, and transfer them fully or partially into the virtual world. In the virtual world, processes are generally streamlined, and products have almost no cost. For example, modern car companies take as little as 12 months to release a new car model to market. How can engineers complete the design, modeling and safety testing of such complicated models in less than a year? They’re simply using virtualized simulation and modeling tools to design the cars, up to the point when they’re crashing virtual cars with virtual crash dummies in them into virtual walls to gain insights about their (physical) safety.

Thanks to virtualization, crash dummies everywhere can relax. Image originally from @TheCrashDummies.

Third Pathway: Mobility

Human beings invent technology to help them fulfill certain needs and take care of their woes. Once that technology is invented, it’s obvious that they would like to enjoy it everywhere they go, at any time. That is why technologies become more mobile as the years go by: in the past, people could only speak on the phone from the post office; today, wireless phones can be used anywhere, anytime. Similarly, cloud computing enables us to work on every computer as though it were our own, by utilizing cloud applications like Gmail, Dropbox, and others.

Fourth Pathway: Product Intelligence

This pathway does not need much of an explanation: we experience its results every day. Whenever our GPS navigation system speaks up in our car, we are reminded of the artificial intelligence engines that help us in our lives. As Kevin Kelly wrote in his WIRED piece in 2014 – “There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ.”

Fifth Pathway: Networking

The power of networking – connecting people and items – becomes clear in our modern age: Napster was the result of networking; torrents are the result of networking; even bitcoin and blockchain technology are manifestations of networking. Since products and services can gain so much from being connected between users, many of them take this pathway into the future.

Sixth Pathway: Interactivity

As products gain intelligence of their own, they also become more interactive. Google completes our search phrases for us; Amazon suggests the products we should desire according to our past purchases. These service providers are interacting with us automatically, to provide a better service for the individual, instead of catering to some average of the masses.

Seventh Pathway: Globalization

Networking means that we can make connections all over the world, and as a result – products and services become global. Crowdfunding firms like Kickstarter, which suddenly enable local businesses to gain support from the global community, are a great example of globalization. Small firms can find themselves capable of catering to a global market thanks to improvements in mail delivery systems – like a company that delivers socks monthly – and that is another example of globalization.

Eighth Pathway: Convergence

Industries are converging, and so are services and products. The iPhone is a convergence of a cellular phone, a computer, a touch screen, a GPS receiver, a camera, and several other products that have come together to create a unique device. Similarly, modern aerial drones could also be considered a result of the convergence pathway: a camera, a GPS receiver, an inertia measurement unit, and a few propellers to carry the entire unit in the air. All of the above are useful on their own, but together they create a product that is much more than the sum of its parts.

 

How could genetic engineering progress along the Eight Pathways of technological improvement?

 

Pathways for Genetic Engineering

First, it’s safe to assume that genetic engineering as a practice will require less space and fewer tools to conduct (Dematerializing genetic engineering). That is hardly surprising, since biotechnology companies are constantly releasing new kits and appliances that streamline, simplify and add efficiency to lab work. This also answers the need for mobility (the third pathway), since it means complicated procedures could be performed outside the top universities and labs.

As part of streamlining the work process of genetic engineers, some elements will be virtualized. As a matter of fact, the Virtualization of genetic engineering has been taking place over the past two decades, with scientists ordering DNA and RNA sequences over the internet, and browsing virtual genomic databases like NCBI and UCSC. The next step of virtualization seems to be occurring right now, with companies like Genome Compiler creating ‘browsers’ for the genome, with bright colors and easily understandable explanations that reduce the level of skill needed to plan an experiment involving genetic engineering.

A screenshot from Genome Compiler
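As a small, concrete example of this kind of virtualization, the sketch below fetches a DNA sequence from NCBI's public nucleotide database using the Biopython library. It assumes Biopython is installed; the accession number (human beta-globin mRNA) and the e-mail address are placeholders for illustration.

from Bio import Entrez

Entrez.email = "you@example.com"  # NCBI asks for a contact address

handle = Entrez.efetch(db="nucleotide", id="NM_000518",
                       rettype="fasta", retmode="text")
print(handle.read())  # prints the sequence in FASTA format
handle.close()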

How can we apply the pathway of Product Intelligence to genetic engineering? Quite easily: virtual platforms for designing genetic engineering experiments will involve AI engines that will aid the experimenter with his task. The AI assistant will understand what the experimenter wants to do, suggest ways, methodologies and DNA sequences that will help him accomplish it, and possibly even – in a decade or two – conduct the experiment automatically. Obviously, that also satisfies the criterion of Interactivity.

If this described future sounds far-fetched, you should take into account that there are already lab robots conducting the most convoluted experiments, like Adam and Eve (see below). As the field of robotics makes strides forward, it is actually possible that we will see similar rudimentary robots working in makeshift biology Do-It-Yourself labs.

Networking and Globalization are essentially the same for the purposes of this discussion, and complement Virtualization nicely. Communities of biology enthusiasts are already forming all over the world, and they’re sharing their ideas and virtual schematics with each other. The iGEM (International Genetically Engineered Machine) annual competition is good evidence of that: undergraduate students worldwide are taking part in this competition, designing parts of useful genetic code and sharing them freely with each other. That’s Networking and Globalization for sure.

Last but not least, we have Convergence – the convergence of processes, products and services into a single overarching system of genetic engineering.

Well, then, what would a convergence of all the above pathways look like?

 

The Convergence of Genetic Engineering

Taking all of the pathways and converging them leads us to a future in which genetic engineering can be performed by nearly anyone, at any place. The process of designing genetic engineering projects will be largely virtualized, and will be aided by artificial assistants and advisors. The actual genetic engineering will be conducted in sophisticated labs – as well as in makers’ houses, and in DIY enthusiasts’ kitchens. Ideas for new projects, and designs of successful past projects, will be shared on the internet. Parts of this vision – like the virtualization of experiments – are happening right now. Other parts, like AI involvement, are still in the works.

What does this future mean for us? Well, it all depends on whether you’re optimistic or pessimistic. If you’re prone to pessimism, this future may look to you like a disaster waiting to happen. When teenagers and terrorists are capable of designing and creating deadly bacteria and viruses, the future of mankind is far from safe. If you’re an optimist, you could consider that as the power to re-engineer life comes down to the masses, innovations will spring up everywhere. We will see glowing trees replacing streetlights, genetically engineered crops with better traits than ever before, and therapeutics (and drugs) being synthesized in human intestines. The truth, as usual, is somewhere in between – and we still have to discover it.

 

Conclusion

If you’ve been reading this blog for some time, you may have noticed a recurring pattern: I inquire into a certain subject, and then analyze it according to a certain foresight methodology. So far, such posts have covered the Business Theory of Disruption (used to analyze the future of collectible card games), Causal Layered Analysis (used to analyze the future of aerial drones and of medical mistakes) and Pace Layer Thinking. I hope to keep giving you orderly, proven methodologies that help in thinking about the future.

How you actually use these methodologies in your business, class or salon talk – well, that’s up to you.

 

 

Why “Magic: the Gathering” is Doomed: Lessons from the Business Theory of Disruption

Twenty years ago, when I was young and beautiful, I picked up a wrapped pack of cards in a computer games store and read for the first time the tag “Magic: the Gathering”. That was the beginning of my long romance with the collectible card game. I imported the game to Israel, translated the rules leaflet into Hebrew for my friends, and went on to play semi-professionally for twenty years, up to the point when I became the Israeli champion. The game pretty much shaped my teenage years, and helped me make friends and meet interesting people from all over the world.

That is why it’s so sad to me to see the state of the game right now, and realize that it is almost certainly doomed to fail in the long run.

 

Magic: The Gathering. The game that has bankrupted thousands of parents.

 

The Rise and Decline of Magic the Gathering

Make no mistake: Magic the Gathering (just Magic for short) is still the top dog among collectible card games in the physical world. According to a report released in 2014, annual revenue from Magic grew by 182% between 2009 and 2014, reaching around $250 million a year. That’s a lot of money, to be sure.

The only problem is that Hearthstone, a digital card game released at the beginning of 2014, reached annual revenues of around $240 million in less than two years. I will not be surprised to see those numbers grow even larger in the future.

This is a bizarre situation. Wizards of the Coast (WotC), the company behind Magic, had twenty years to take the game online and turn it into a success. They failed miserably, and their meager attempts at doing so became a target for scorn and ridicule from players worldwide. While WotC did create an online platform for playing Magic, there were plenty of complaints: for starters, playing was extremely costly, since virtual card packs generally cost the same as packs in the physical world. An evening of playing a draft – a small tournament with only eight players – would’ve cost each player around ten dollars, and would’ve required a time investment of up to four straight hours, much of it wasted waiting for the other players in the tournament to finish their matches with each other and move on to the next round.

These issues meant that Magic Online was mostly reserved for the top players, who had the money and the willingness to spend it on the game. WotC was aware of the disgruntlement about the state of things, but chose to do nothing – after all, it had no real contenders in either the physical or the digital market. What did it have to fear? It had no real reason to change. In fact, the only smart decision WotC managers could take was NOT to take a risk and try to change the online experience, but to keep making money – and lots of it – from a game that functioned well enough. And they could continue doing so until their business was rudely and abruptly disrupted.

 

The Business Theory of Disruption

The theory of disruption was originally conceived by Harvard Business School professor Clayton M. Christensen and described in his best-selling book The Innovator’s Dilemma. Christensen followed the evolution of several industries – particularly hard drives, but also metalworking, retail stores and tractors. He found that in each sector, managers supported research and development, but all that R&D produced only two general kinds of innovation: sustaining innovations and disruptive ones.


The sustaining innovations were generally those that the customers asked for: increasing hard drive storage capacity, or making data retrieval faster. They led to obvious improvements, which brought immediate and clear benefit to the company in a competitive market.

The disruptive innovations, on the other hand, were those that completely changed the picture, and actually had a good chance of costing the company money in the short term. Furthermore, the customers saw little value in them, and so the managers saw no advantage in pursuing these innovations. The company-employed engineers who came up with ideas for disruptive innovations simply couldn’t find support for them within the company.

A good example of the process of disruption is the hard drive industry, a few years before the transition from 8-inch drives to 5.25-inch drives occurred. A quick look at the following parameters of the two contenders, back in 1981, immediately explains why managers in the 8-inch drive manufacturing companies were wary of switching over to the 5.25-inch drive market. The 5.25-inch drives were simply inefficient, losing to the 8-inch drives on almost every parameter except size! And while size is obviously important, the computer market at the time consisted mainly of “minicomputers” – computers that cost around $25,000 and were the size of a small refrigerator. At that size, the physical volume of the hard drive was simply irrelevant.

Attribute                        8-Inch Drives (Minicomputer Market)   5.25-Inch Drives (Desktop Computer Market)
Capacity (megabytes)             60                                    10
Physical volume (cubic inches)   566                                   150
Weight (pounds)                  21                                    6
Access time (milliseconds)       30                                    160
Cost per megabyte                $50                                   $200
Unit cost                        $3,000                                $2,000

The table has been copied from the book The Innovator’s Dilemma by Clayton M. Christensen.
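Note that the cost-per-megabyte row follows directly from the capacity and unit-cost rows; the quick check below just divides one by the other.

```python
# Quick arithmetic check on the table: cost per megabyte = unit cost / capacity.
drives = {
    "8-inch":    {"capacity_mb": 60, "unit_cost_usd": 3000},
    "5.25-inch": {"capacity_mb": 10, "unit_cost_usd": 2000},
}
for name, d in drives.items():
    per_mb = d["unit_cost_usd"] / d["capacity_mb"]
    print(f"{name}: ${per_mb:.0f} per megabyte")
# 8-inch: $50 per megabyte; 5.25-inch: $200 per megabyte -- matching the table.
```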

And so, 8-inch drive companies continued to focus on 8-inch drives, while a few renegade engineers opened new companies and worked hard on developing better 5.25-inch drives. In a few years, the 5.25-inch drives were just as efficient as the 8-inch drives, and a new market formed: that of the personal desktop computer. Suddenly, every computer maker in the market needed 5.25-inch drives.

One of the first minicomputers. On display at the Vienna Technical Museum. Image found on Wikipedia.

Now, the 8-inch drive company managers were far from stupid or ignorant. When they saw that there was a market for 5.25-inch drives, they decided to leap on the opportunity as well, and develop their own 5.25-inch drives. Sadly, they were too late. They discovered that it takes time and effort to become acquainted with the demands of a new market, to adapt their manufacturing machinery, and to change the entire company’s workflow in order to produce and supply computer makers with 5.25-inch drives. They joined the competition far too late, and even though they had been the leviathans of the industry just ten years earlier, they soon sank to the bottom and were driven out of business.

What happened to the engineers who drove forward the 5.25-inch drives revolution, you may ask? They became CEOs of the new 5.25-inch drive manufacturing companies. A few years later, when their own young engineers came to them and suggested investing in the new and still-faulty 3.5-inch drives, they decided that there was no market for the invention, no demand for it, and that it was too inefficient anyway.

Care to guess what happened next? Ten years later, the 3.5-inch drives took over, portable computers utilizing them were everywhere, and the 5.25-inch drive companies crumbled away.

That is the essence of disruption: decisions that make sense in the present turn out to be clearly incorrect in the long term, once markets change. Companies that relax and only invest in sustaining innovations, instead of trying to radically change their products and reshape the markets themselves, are doomed to fail. In Peter Diamandis’ words –

“If you aren’t disrupting yourself, someone else is.”

Now that you understand the basics of the Theory of Disruption, let’s see how it applies to Magic.

 

Magic and Disruption

Wizards of the Coast has made almost exclusively sustaining improvements over the last twenty years: its talented R&D team focused on releasing new expansions with new cards and new playing mechanics. WotC did try to disrupt itself once by creating the Magic Online platform, but failed to support and nurture this disruptive innovation. The online platform remained an outdated relic – a relic that made money, to be sure, but one that was slowly becoming irrelevant in the online world of collectible card games.

In the last five years, many other collectible card games have reared their heads online, including minor successes like Shadow Era (200,000 players, ~$156,000 in annual revenue) and Urban Rivals (an estimated ~$140,000 in annual revenue). Each of them made discoveries in the online world: they realized that players need to be offered cards for free, that they need to be lured into playing every day, and that the free-to-play model can still prove profitable, since the company’s costs are close to zero: the firm doesn’t need to physically print new cards or distribute them to retailers. But these upstarts were still so small that WotC could afford to effectively ignore them. They didn’t pose a real threat to Magic.

Then Hearthstone burst into existence in 2014, and everything changed.

unnamed.png

Hearthstone’s developers took the best traits of Magic and combined them with all the insights the online gaming industry has accumulated in recent years. They made the game essentially free to play in order to attract a large number of players, understanding that revenue would come from the small fraction of players who spend money on the game. They minimized wasted time by setting a time limit on every turn, and by establishing a rule that players can only act during their own turn (so there’s no need to wait for the other player’s response after every move). They even broke up Magic’s eight-person draft tournaments, so that every player who drafts a deck can play against any other player who has drafted a deck, at any time. There’s no wasted time in Hearthstone – just games to play and fun to be had.

WotC was still deep asleep at that time. In July 2014, Magic brand manager Liz Lamb-Ferro told GamesBeat that –

“If you’re looking for just that immediate face-to-face, back-and-forth action-based game with not a lot of depth to it, then you can find that. … But if you want the extras … then you’re eventually going to find your way to Magic.”

Lamb-Ferro was right – Hearthstone IS a simpler game – but that simplicity streamlines gameplay, and thus makes the game faster and more enjoyable for many players. And even if we were to accept that Hearthstone does not attract veteran players who “want the extras” (actually, it does), WotC should have realized that other online collectible card games would soon combine Magic’s sophistication with Hearthstone’s mechanisms for streamlining gameplay. And indeed, in 2014 a new game – SolForge – took all of the strengths of Hearthstone while adding a mechanic of card transformation (each card transforms into three different versions of itself) that could only be possible in card games played online. SolForge doesn’t even have a physical version and never could have one, and the game is already costing Magic a few more veteran players.

This is the point at which WotC began to realize it was falling far behind the curve. And so, in the middle of 2015, it released Duels of the Planeswalkers 2016. I won’t even bother detailing all the infuriating problems with the game. Suffice it to say that it garnered more negative reviews than positive ones, and made clear that WotC was still lagging far behind its competitors in its understanding of the virtual world, of user experience, and of what players actually want. In short, WotC found itself in the position of the 8-inch drive manufacturers, suddenly realizing that the market had changed under its nose in less than two years.

 

What Could WotC do?

The sad truth is that WotC can probably do nothing right now to fix Magic. The firm can continue churning out sustaining improvements – new expansions and exciting new cards – but it will find itself hard-pressed to take over the digital landscape. Magic is a game that was designed for the physical world, not for the current frenzied pace of virtual collectible card games. Magic simply isn’t suited to the new market, unless WotC changes the rules so much that it’s no longer the same game.

Could WotC change the rules in such a dramatic fashion? Yes, but at a great cost. The company could recreate the game online with new cards and rules, but it would have to invest time and effort in relearning the workings of the virtual world and creating a new platform for the revised Magic. Unfortunately, it’s not clear that WotC will have time to do that with Hearthstone, SolForge and a horde of other card games snarling at its heels. The future of Magic online does not look bright, to say the least.

Does that mean Magic the Gathering will vanish completely? Probably not. The Magic brand is still strong everywhere except for the virtual world, which means that in the next five years the game will remain in existence mostly in the physical world, where it will bring much joy to children in school breaks, and much money to the pockets of WotC. During these five years, WotC will have the opportunity to rethink and recreate the game for the next big market: virtual and augmented reality. If the firm succeeds in that front, it’s possible that Magic will be reinvented for the new-new market. If it fails and elects to keep the game anchored only in the physical world, then Magic will slowly but surely vanish away as the market changes and new and exciting games take over the attention span of the next generation.

That’s what happens when you disregard the Theory of Disruption.

 

Forecast: Flying Cars by 2035

Whenever a futurist talks about the future and lays out all the dazzling wealth technological advancements hold in store for us, there is one question that is always asked by the audience.

“Where is that flying car you promised me?”

Well, we may be drawing near to a future of flying cars. While the road to that future may still be long and arduous, I’m willing to forecast that twenty years from now we will have flying cars in civilian use – but only if three technological and societal conditions are fulfilled by that time.

In order to understand these conditions, let us first examine briefly the history of flying cars, and understand the reasons behind their absence in the present.

 

Flying Cars from the Past

Surprising as it may be, the concept of flying cars has been around far longer than the Back to the Future trilogy. Henry Ford himself produced a rudimentary, experimental ‘flying car’ in 1926, although it was really more of a mini-airplane for the average American consumer. Despite public excitement, the idea crashed and burned within two years, together with the prototype and its test pilot.

One of the forgotten historical flying cars. A prototype of the Ave Mizar.

Since the 1920s, it seems that innovators and inventors have come up with a flying car almost once a decade. You can see pictures of some of these cars in Popular Mechanics’ gallery. Some crashed and burned, in the tradition set by Ford. Others managed to soar sky-high. None actually made it to mass production, for two main reasons:

  • Extremely wasteful: flying cars are extremely wasteful in terms of fuel consumption. Their energy efficiency is abysmal compared to that of high-altitude, high-speed airplanes.
  • Extremely unsafe: let’s be honest for a moment, OK? You give people cars that can drive on what is essentially a one-dimensional road, and what do they do? They cause traffic accidents. What do you think would happen if you gave everyone the ability to drive a car in three dimensions? Crash, crash and burn all over again. For flying cars to become widely used in society, everyone would need to take flying lessons. Good luck with that.

Together, these two limitations made sure that flying cars for the masses remained a fantasy – and they still largely are. In fact, I would go as far as saying that any new concept or prototype of a flying car that does not take these challenges into account is presented to the public as a ‘flying car’ only as a publicity stunt.

But now, things are beginning to change, because of three trends that together will provide answers to the main barriers standing in the way of flying cars.

 

The Three Trends that will Enable Flying Cars

There are three trends that, combined, will enable the use of flying cars by the public within twenty years.

First Trend: Massive Improvement in Aerial Drone Capabilities

If you visit your city’s playgrounds, you may find children there having fun flying drones around. The drones they’re using – which often cost less than $200 – would’ve been considered highly sophisticated weapons of war just twenty years ago, and would’ve been sold by arms manufacturers at prices on the order of millions of dollars.

14-year-old Morgan Tien with his drone. Source: Bend Bulletin

Dr. Peter Diamandis, innovator, billionaire and futurist, wrote in 2014 about the massive improvement in the capabilities of aerial drones. Briefly, current-day drones are a product of exponential improvement in computing elements (such as inertial measurement units), communications (GPS receivers and systems), and even sensors (digital cameras). All of the above – at their current sizes and prices – would not have been available even ten years ago.

Aerial drones are important for many reasons, not least because they may yet serve as the basis for a flying car. Innovators, makers and even firms today are beginning to strap together several drones, and turn them into a flying platform that can carry individuals around.

The most striking example of this kind comes from a Canadian inventor who recently flew 275 meters on a drone platform he basically fashioned in his garage.

Another, more cumbersome version of a human-transportation drone (let’s call them HTDs from now on, shall we?) was demonstrated this week at the Las Vegas Convention Center. It is essentially a tiny helicopter with four double propellers attached, much like a large drone. It has room for just one passenger, and can fly for up to 23 minutes, according to the manufacturer. Most importantly, the Ehang 184, as it’s called, is supposed to be autonomous, which brings us straight to the next trend: the rise of machine intelligence.

ehang-184-aav-passenger-drone-12.jpeg
Ehang 184. Credit: Ehang. Originally found on Gizmag.

Second Trend: Machine Intelligence and Flying Cars

There can be little question that drones will keep improving in their capabilities. We will improve our understanding of the science and technology behind aerial drones, and develop more efficient tools for aerial travel, including some that will carry people around. But will these tools be available for mass use?

This is where the safety barrier comes into the picture. You can’t let the ordinary Joe Shmoe control a vehicle like the Ehang 184, or even a light-weight drone platform – not without teaching them how to fly the thing, which would take a lot of practice time and money, and would sharply limit the number of potential users.

This is where machine intelligence comes into the picture.

Autonomous control is virtually a must for publicly usable HTDs. Luckily, machine intelligence is making leaps and bounds forward, with autonomous (driverless) cars travelling the roads even today. If such autonomous systems can function for cars on the roads, why not do the same for drones in the air?

As things currently stand, all aerial drones will have to be controlled at least partly autonomously, in order to prevent collisions with other drones. NASA is planning a traffic management system for drones (its UAS Traffic Management, or UTM, project), which could handle tens of thousands of drones – and many more than that, if the need arises. The next logical step, therefore, is to include future HTDs in this system, thus taking control out of the pilot’s hands and transferring it completely to the vehicle and the system controlling it.

If such a system for managing aerial traffic becomes a reality, and assuming that drone capabilities advance enough to provide human transportation services, then autonomous HTDs for mass use will not be far behind.

These two trends address the second barrier, that of inherent unsafety. The third trend, which I’ll present now, deals with the first barrier: the inefficient and wasteful use of energy.

Third Trend: Solar Energy

All small drones rely on electricity to function. Even a larger drone like the Ehang 184, which could be used for human transport, is powered by electricity and can fly for 23 minutes before requiring a recharge. While 23 minutes may not sound like a lot of time, it’s more than enough for people to ‘hop’ from one side of most cities to the other, as long as there isn’t aerial congestion.

Of course, that’s the situation today, but batteries keep improving. Elon Musk claims, for example, that by 2017 Tesla’s electric cars will have a 600-mile range on a single charge. As batteries improve further, HTDs will be able to stay in the air for ever longer periods, despite being powered by electricity alone. This reliance on electricity is important, since twenty years from now it is highly likely that we’ll have much cheaper electric energy coming directly from the sun.

Support for this argument comes from the exponential decline in the costs of producing and utilizing solar energy. Forty years ago, it would’ve cost about $75 to produce one watt of solar energy; today the cost is less than a single dollar per watt. And as prices go down, the number of solar panel installations soars sky-high, roughly doubling every two years. Worldwide solar capacity in 2014 was 53 times higher than in 2005.
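A quick back-of-envelope check, assuming steady exponential growth over those nine years, shows what the 53-fold figure implies:

```python
# Back-of-envelope check on the solar capacity figures quoted above, assuming
# steady exponential growth between 2005 and 2014.
import math

growth_factor = 53            # worldwide capacity in 2014 vs. 2005
years = 2014 - 2005           # nine years of growth

annual_rate = growth_factor ** (1 / years) - 1
doubling_time = math.log(2) / math.log(1 + annual_rate)

print(f"Implied annual growth: {annual_rate:.0%}")          # ~55% per year
print(f"Implied doubling time: {doubling_time:.1f} years")  # ~1.6 years
```

That works out to a doubling time of roughly a year and a half – if anything, slightly faster than the “every two years” rule of thumb.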

Credit: Earth Policy Institute / Bloomberg. Originally found on Treehugger.

If the rising trend of solar energy does not grind to a halt sometime in the next decade, then we will obtain much of our electric energy from the sun. We won’t have usable passenger solar airplanes – those need high-energy jet fuel to operate – but we will have solar panels pretty much everywhere: covering the sides and tops of buildings, and quite possibly every car as well. Buildings will both consume and produce energy. Much of the surplus energy will be stored in batteries, or almost instantaneously diverted via the smart grid to other spots in the city where it’s needed.

If that is the face of the future – and the trends support this view – then HTDs could be an optimal means of transportation in the city of the future. Aerial drones could be stationed on the roofs of houses and skyscrapers, where they would be constantly charged by solar panels until they need to take a passenger to another building. Such a hop would take only 10-15 minutes, followed by a recharging period of 30 minutes or so. The entire system would operate autonomously – without human control or interference – and be powered by the sun.
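To get a feel for what such a rooftop system implies, here is a toy capacity estimate based on the rough numbers above (a 15-minute hop followed by a 30-minute recharge). The function name and the demand figure in the example are hypothetical; this is a back-of-envelope model, not a real dispatch simulation.

```python
# Toy capacity estimate for the rooftop HTD system sketched above.
# A 15-minute hop followed by a 30-minute recharge means one HTD completes
# roughly 60 / 45 ≈ 1.3 trips per hour.
import math

def htds_needed(trips_per_hour: float,
                hop_minutes: float = 15.0,
                recharge_minutes: float = 30.0) -> int:
    """Rough number of HTDs a rooftop needs to serve a given hourly demand."""
    cycle_minutes = hop_minutes + recharge_minutes
    trips_per_htd_per_hour = 60.0 / cycle_minutes
    return math.ceil(trips_per_hour / trips_per_htd_per_hour)

# Example: dispatching 20 passengers per hour at rush hour would require
# on the order of 15 charged HTDs stationed on (or rotating through) the roof.
print(htds_needed(20))   # -> 15
```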

 

Conclusions and Forecast for the Future

When can we expect this system to be deployed? Obviously it’s difficult to be certain about the future, particularly in cases where technological trends meet with societal, legal and political barriers to entry. Current culture will find it difficult to accept autonomous vehicles, and Big Fossil Fuel firms are still trying to pretend solar energy isn’t here to stay.

All the same, it seems that HTDs are already rearing their heads, with several inventors working separately to produce them. Their attempts are still tentative, but every attempt demonstrates the potential of HTDs and their viability for human transportation. I would therefore expect that in the next five years we will see demonstrations of HTDs (not yet for public use) that can carry individuals a distance of at least one mile, and can be fully charged within one hour by solar panels alone. That is the easy forecast to make.

The more difficult forecast involves the use of autonomous aerial drones, the assimilation of HTDs into an overarching system that controls all the drones in a shared airspace, and the mass deployment of HTDs in a city. Each of these achievements must happen separately in order to fulfill the larger vision of a flying car for the masses. I am going to take a wild guess here and suggest that, if no Hindenburg-like disaster happens, we’ll see real flying cars in our cities twenty years from now – by the year 2035. It is likely that these HTDs will only be able to carry a single individual, and will be used more as a ‘flying taxi’ service between buildings for individual businesspeople than as a full-blown family flying car.

And then, finally, when people ask me where their flying car is, I will be able to provide a simple answer: “It’s parked on the roof.”