The Future of Genetic Engineering: Following the Eight Pathways of Technological Advancement

The future of genetic engineering is, at the moment, a mystery to everyone. Reprogramming life is an undeniably cool idea, but today it is practiced mostly in the most sophisticated labs. How will genetic engineering change in the future, though? Who will use it, and how?

In an attempt to provide a starting point for a discussion, I’ve analyzed the issue according to Daniel Burrus’ “Eight Pathways of Technological Advancement”, found in his book Flash Foresight. While the book provides more insights about creativity and business skills than about foresight, it does contain some interesting gems like the Eight Pathways. I’ve led workshops in the past where I taught chief executives how to use this methodology to gain insights about the future of their products, and they were a great success. So in this post we’ll try applying it to genetic engineering – and we’ll see what comes out.


Eight Pathways of Technological Advancement

Make no mistake: technology does not “want” to advance or to improve. There is no law of nature dictating that technology will advance, or in what direction. Human beings improve technology, generation after generation, to better solve their problems and make their lives easier. Since we roughly understand humans and their needs and wants, we can often identify how technologies will improve in order to answer those needs. The Eight Pathways of Technological Advancement, therefore, are generally those that adapt technology to our needs.

Let’s go briefly over the pathways, one by one. If you want a better understanding and more elaborate explanations, I suggest you read the full Flash Foresight book.

First Pathway: Dematerialization

By dematerialization we mean literally to remove atoms from the product, leading directly to its miniaturization. Cellular phones, for example, have become much smaller over the years, as did computers, data storage devices and generally any tool that humans wanted to make more efficient.

Of course, not every product undergoes dematerialization. Even if we were to miniaturize car engines, the cars themselves would still need to remain large enough to hold at least one passenger comfortably. So we need to take into account that the device must still be able to fulfill its original purpose.

Second Pathway: Virtualization

Virtualization means that we take certain processes and products that currently exist or are being conducted in the physical world, and transfer them fully or partially into the virtual world. In the virtual world, processes are generally streamlined, and products have almost no cost. For example, modern car companies take as little as 12 months to release a new car model to market. How can engineers complete the design, modeling and safety testing of such complicated models in less than a year? They’re simply using virtualized simulation and modeling tools to design the cars, up to the point when they’re crashing virtual cars with virtual crash dummies in them into virtual walls to gain insights about their (physical) safety.

Thanks to virtualization, crash dummies everywhere can relax. Image originally from @TheCrashDummies.

Third Pathway: Mobility

Human beings invent technology to help them fulfill certain needs and take care of their woes. Once that technology is invented, it’s obvious that they would like to enjoy it everywhere they go, at any time. That is why technologies become more mobile as the years go by: in the past, people could only speak on the phone from the post office; today, wireless phones can be used anywhere, anytime. Similarly, cloud computing enables us to work on every computer as though it were our own, by utilizing cloud applications like Gmail, Dropbox, and others.

Fourth Pathway: Product Intelligence

This pathway does not need much of an explanation: we experience its results every day. Whenever our GPS navigation system speaks up in our car, we are reminded of the artificial intelligence engines that help us in our lives. As Kevin Kelly wrote in his WIRED piece in 2014 – “There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ.”

Fifth Pathway: Networking

The power of networking – connecting people and devices to one another – becomes clear in our modern age: Napster was the result of networking; torrents are the result of networking; even bitcoin and blockchain technology are manifestations of networking. Since products and services can gain so much from connecting their users, many of them take this pathway into the future.

Sixth Pathway: Interactivity

As products gain intelligence of their own, they also become more interactive. Google completes our search phrases for us; Amazon suggests the products we might want based on our past purchases. These services interact with us automatically, providing better service to the individual instead of catering to some average of the masses.

Seventh Pathway: Globalization

Networking means that we can make connections all over the world, and as a result products and services become global. Crowdfunding platforms like Kickstarter, which enable local businesses to gain support from the global community, are a great example of globalization. So are small firms that find themselves catering to a global market thanks to improvements in mail delivery – like a company that ships socks to subscribers every month.

Eighth Pathway: Convergence

Industries are converging, and so are services and products. The iPhone is a convergence of a cellular phone, a computer, a touch screen, a GPS receiver, a camera, and several other products that have come together to create a unique device. Similarly, modern aerial drones could also be considered a result of the convergence pathway: a camera, a GPS receiver, an inertia measurement unit, and a few propellers to carry the entire unit in the air. All of the above are useful on their own, but together they create a product that is much more than the sum of their parts.

 

How could genetic engineering progress along the Eight Pathways of technological improvement?

 

Pathways for Genetic Engineering

First, it’s safe to assume that genetic engineering as a practice will require less space and fewer tools to conduct (dematerializing genetic engineering). That is hardly surprising, since biotechnology companies are constantly releasing new kits and appliances that streamline, simplify and add efficiency to lab work. This also answers the need for mobility (the third pathway), since it means complicated procedures could be performed outside the top universities and labs.

As part of streamlining the work process of genetic engineers, some elements will be virtualized. As a matter of fact, the Virtualization of genetic engineering has been taking place over the past two decades, with scientists ordering DNA and RNA sequences over the internet and browsing virtual genomic databases like NCBI and UCSC. The next step of virtualization seems to be occurring right now, with companies like Genome Compiler creating ‘browsers’ for the genome, with bright colors and easily understandable explanations that reduce the level of skill needed to plan a genetic engineering experiment.

A screenshot from Genome Compiler
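To make the virtualization concrete, here is a minimal sketch of the kind of step described above – pulling a sequence record from NCBI over the internet instead of touching a pipette. It uses Biopython’s Entrez and SeqIO interfaces; the accession number and e-mail address are illustrative placeholders only.

```python
# A minimal sketch of the virtualized workflow described above: fetching a
# nucleotide record from NCBI's servers rather than working at the bench.
# Requires Biopython; the accession and e-mail below are illustrative only.
from Bio import Entrez, SeqIO

Entrez.email = "you@example.com"  # NCBI asks for a contact address

handle = Entrez.efetch(db="nucleotide", id="NM_000518",
                       rettype="fasta", retmode="text")
record = SeqIO.read(handle, "fasta")
handle.close()

print(record.id, len(record.seq), "bp")
print(record.seq[:60], "...")
```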

How can we apply the pathway of Product Intelligence to genetic engineering? Quite easily: virtual platforms for designing genetic engineering experiments will involve AI engines that aid the experimenter with the task. The AI assistant will understand what the experimenter wants to do; suggest approaches, methodologies and DNA sequences to accomplish it; and possibly even – in a decade or two – conduct the experiment automatically. Obviously, that also satisfies the criterion of Interactivity.

If this future sounds far-fetched, consider that there are already lab robots conducting highly convoluted experiments, like Adam and Eve (see below). As the field of robotics makes strides forward, it is entirely possible that we will see similar rudimentary robots working in makeshift do-it-yourself biology labs.

Networking and Globalization are essentially the same for the purposes of this discussion, and complement Virtualization nicely. Communities of biology enthusiasts are already forming all over the world, sharing their ideas and virtual schematics with each other. The annual iGEM (International Genetically Engineered Machine) competition is good evidence of that: undergraduate students worldwide take part, designing useful pieces of genetic code and sharing them freely with one another. That’s Networking and Globalization for sure.

Last but not least, we have Convergence – the convergence of processes, products and services into a single overarching system of genetic engineering.

Well, then, what would a convergence of all the above pathways look like?

 

The Convergence of Genetic Engineering

Taking all of the pathways and converging them leads us to a future in which genetic engineering can be performed by nearly anyone, at any place. The process of designing genetic engineering projects will be largely virtualized, and will be aided by artificial assistants and advisors. The actual genetic engineering will be conducted in sophisticated labs – as well as in makers’ houses and in DIY enthusiasts’ kitchens. Ideas for new projects, and designs of successful past projects, will be shared on the internet. Parts of this vision – like the virtualization of experiments – are happening right now. Other parts, like AI involvement, are still in the works.

What does this future mean for us? Well, it all depends on whether you’re optimistic or pessimistic. If you’re prone to pessimism, this future may look to you like a disaster waiting to happen. When teenagers and terrorists are capable of designing and creating deadly bacteria and viruses, the future of mankind is far from safe. If you’re an optimist, you could argue that as the power to re-engineer life reaches the masses, innovation will flourish everywhere. We will see glowing trees replacing lightbulbs in the streets, genetically engineered crops with better traits than ever before, and therapeutics (and drugs) being synthesized in human intestines. The truth, as usual, is somewhere in between – and we still have to discover it.

 

Conclusion

If you’ve been reading this blog for some time, you may have noticed a recurring pattern: I inquire into a certain subject, and then analyze it according to a certain foresight methodology. So far, such posts have covered the Business Theory of Disruption (used to analyze the future of collectible card games), Causal Layered Analysis (used to analyze the future of aerial drones and of medical mistakes) and Pace Layer Thinking. I hope to keep giving you orderly, proven methodologies that help in thinking about the future.

How you actually use these methodologies in your business, class or salon talk – well, that’s up to you.

 

 

When the Marine Corps is Using Science Fiction to Prepare for the Future

When most of us think of the Marine Corps, we usually imagine sturdy soldiers charging headlong into battle, or carefully sniping at an enemy combatant from the tops of buildings. We probably don’t imagine them reading – or writing – science fiction. And yet, that’s exactly what 15 marines are about to do two weeks from now.

The Marine Corps Warfighting Lab (I bet you didn’t know they have one) and The Atlantic Council are holding a Science Fiction Futures Workshop in early February. And guess what? They’re looking for “young, creative minds”. You probably have to be a marine, but even if you aren’t – maybe you’ll have a chance if you submit an application as well.


 

Can Social Networks stop Ignorance (and Stupidity)?

Two days ago, the picture above was posted on Facebook by Tom Martindale –

Two things are immediately obvious:

  1. The ‘planet’ to the right is actually the moon with the United States stretched all over it;
  2. About two thousand people thought it was important enough to share this obvious hoax to their friends.

So – are there indeed two thousand people ignorant enough to share this message without realizing just how ridiculous it is? Isn’t that a reason to be worried about the state of the nation, about people’s education, and also to bemoan the tendency of social media to spread rumors far and wide without any criticism?

Not necessarily.

About two days ago, when the image was still fresh on Facebook and had gathered only 500 shares, I took the liberty of going through all the “shares” of the picture that Facebook saw fit to show me. Altogether, I browsed through 86 “shares” – barely a fifth of the full number of people who shared the picture, but still a significant sample. I divided the shares into three categories:

  1. Identified the hoax: Shares by people who recognized the hoax, or whose friends explained the hoax to them in the replies.
  2. Fooled by the hoax: Shares by people who explicitly mentioned that we were destroying the Earth, which I’m assuming means they thought the picture is authentic.
  3. Unknown: Shares by people who didn’t write anything about the picture, and whose friends did not reply either. We can’t know whether they shared the picture because they believe it is authentic, or because they wanted to have a good laugh about the hoax with their friends.

Care to guess how many people fell for the hoax?

The results are pretty clear. Out of the 86 shares, only one treated the picture explicitly as if it symbolized the destruction of the Earth. Of the other 85 shares, 40 dismissed the picture outright or had it dismissed for them by their friends, while the remaining 45 are unknown – they didn’t write anything about the picture in their share.


That’s actually very impressive. If we assume that the “shares” I counted reflect the overall distribution of shares, it means that for every person who fell victim to the hoax, we have forty people who identified it outright as a hoax, or had it explained to them immediately by their friends.
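For transparency, here is the arithmetic behind that ratio as a small Python sketch, with the counts hard-coded from the 86 shares I examined; it adds nothing beyond the numbers quoted above.

```python
# The tally behind the ratio above; counts are hard-coded from the post.
shares = {
    "identified the hoax": 40,
    "fooled by the hoax": 1,
    "unknown": 45,
}

total = sum(shares.values())
assert total == 86  # the number of shares examined

for category, count in shares.items():
    print(f"{category:>20}: {count:3d} ({count / total:.1%})")

# Debunkers per believer, among the shares whose intent is known:
print("ratio:", shares["identified the hoax"] // shares["fooled by the hoax"])
```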

What can we learn from this (admittedly small) piece of data?

First, just because a certain image gets shared around the social networks, it doesn’t automatically mean that the sharers actually believe it is true or even worth reading. Many may be sharing it simply to ridicule others. I know this isn’t really a newsflash for all of you reading this post, but with everyone being so gloomy about the state of the nation’s ignorance and gullibility, it’s a good thing to keep in mind.

Second, while social networks are often rightly accused of spreading rumors, lies and misperceptions, it’s impossible to ignore their positive effects. Ignorant people can be found in every crowd, but they often don’t even know how ignorant they actually are. In the social network, it can be difficult to remain ignorant unless you’re doing so by choice. Whatever you share is open to debate, to criticism, to ridicule and to corrections by people who often know more and care more for the subject than you do.

Obviously, that’s not the end of the issue by far. Social networks can also be used to spread untruths of many kinds. On many issues, the loudest and most rabid voices are heard the most. If an alien from outer space logged into Facebook today, it would conclude that GMOs are hazardous to your health, vaccines cause autism, and marijuana cures cancer. At least two of the above are clearly and demonstrably false, and yet each conspiracy theory has gathered a large crowd of believers who will defend it online to their dying breath from any rational argument.

So: social networks – are they good or bad for public knowledge and understanding? That’s obviously a false dichotomy. Social networks work just like the agora – the gathering place where all the Greek citizens came together to discuss matters. They bring the agora to us, which means we’re going to get approached by many charlatans peddling their wares and beliefs, and also by the skeptics who are trying to warn us off. Social networks take away the loneliness of the individual, and turn us into a crowd – for good AND for bad at the same time.

Why “Magic: the Gathering” is Doomed: Lessons from the Business Theory of Disruption

Twenty years ago, when I was young and beautiful, I picked up a wrapped pack of cards in a computer games store, and read for the first time the tag “Magic: the Gathering”. That was the beginning of my long-time romance with the collectible card game. I imported the game to Israel, translated the rules leaflet to Hebrew for my friends, and went on to play semi-professionally for twenty years, up to the point when I became the Israeli champion. The game has pretty much shaped my years as a teenager, and has helped me make friends and meet interesting people from all over the world.

That is why it’s so sad to me to see the state of the game right now, and realize that it is almost certainly doomed to fail in the long run.

 

Magic: The Gathering. The game that has bankrupted thousands of parents.

 

The Rise and Decline of Magic the Gathering

Make no mistake: Magic the Gathering (just Magic, for short) is still the top dog among collectible card games in the physical world. According to a report released in 2014, the annual revenue from Magic grew by 182% between 2009 and 2014, reaching a total of around $250 million a year. That’s a lot of money, to be sure.

The only problem is that Hearthstone, a digital card game released at the beginning of 2014, has reached annual revenues of around $240 million in less than two years. I will not be surprised to see those numbers grow even larger in the future.

This is a bizarre situation. Wizards of the Coast (WotC), the company behind Magic, had twenty years to take the game online and turn it into a success. They failed miserably, and their meager attempts at doing so became a target for scorn and ridicule from players worldwide. While WotC did create an online platform to play Magic on, there were plenty of complaints: for starters, playing was extremely costly, since the virtual card packs generally cost the same as packs in the physical world. An evening of playing a draft – a small tournament with only eight players – would’ve cost each player around ten dollars, and would’ve required a time investment of up to four straight hours, much of it wasted in waiting for the other players in the tournament to finish their matches with each other and move on to the next round.

These issues meant that Magic Online was mostly reserved for the top players, who had the money and the willingness to spend it on the game. WotC was aware of the disgruntlement about the state of things, but chose to do nothing – after all, it had no real contenders in the physical or the digital market. What did it have to fear? It had no real reason to change. In fact, the only smart decision WotC managers could make was NOT to take a risk and try to change the online experience, but to keep on making money – and lots of it – from a game that functioned well enough. And they could continue doing so until their business was rudely and abruptly disrupted.

 

The Business Theory of Disruption

The theory of disruption was originally conceived by Harvard Business School professor Clayton M. Christensen, and described in his best-selling book The Innovator’s Dilemma. Christensen followed the evolution of several industries – particularly hard drives, but also metalworking, retail stores and tractors. He found that in each sector the managers supported research and development, but all that R&D produced only two general kinds of innovation: sustaining innovations and disruptive ones.


The sustaining innovations were generally those that the customers asked for: increasing hard drive storage capacity, or making data retrieval faster. They led to obvious improvements, which brought immediate and clear benefit to the company in a competitive market.

The disruptive innovations, on the other hand, were those that completely changed the picture, and actually had a good potential to cost the company money in the short-term. Furthermore, the customers saw little value in them, and so the managers saw no advantage in pursuing these innovations. The company-employed engineers who came up with the ideas for disruptive innovations simply couldn’t find support for them in the company.

A good example of the process of disruption is the hard drive industry, a few years before the transition from 8-inch drives to 5.25-inch drives occurred. A quick look at the following parameters of the two contenders, back in 1981, explains immediately why managers in the 8-inch drive manufacturing companies were wary of switching over to the 5.25-inch drive market. The 5.25-inch drives were simply inefficient, and lost the competition with 8-inch drives in almost every parameter except their size! And while size is obviously important, the computer market at the time consisted mainly of “minicomputers” – computers that cost ~$25,000 and were the size of a small refrigerator. At that size, the physical volume of the hard drives was simply irrelevant.

Attribute                        | 8-Inch Drives (Minicomputer Market) | 5.25-Inch Drives (Desktop Computer Market)
Capacity (megabytes)             | 60                                  | 10
Physical volume (cubic inches)   | 566                                 | 150
Weight (pounds)                  | 21                                  | 6
Access time (milliseconds)       | 30                                  | 160
Cost per megabyte                | $50                                 | $200
Unit cost                        | $3,000                              | $2,000

The table has been copied from the book The Innovator’s Dilemma by Clayton M. Christensen.
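As a quick sanity check, the cost-per-megabyte row follows directly from the unit cost and capacity figures. A small sketch of that arithmetic:

```python
# Recomputing the "cost per megabyte" row of the table above from the 1981
# unit cost and capacity figures.
drives = {
    "8-inch (minicomputer market)": {"capacity_mb": 60, "unit_cost_usd": 3000},
    "5.25-inch (desktop market)":   {"capacity_mb": 10, "unit_cost_usd": 2000},
}

for name, d in drives.items():
    cost_per_mb = d["unit_cost_usd"] / d["capacity_mb"]
    print(f"{name}: ${cost_per_mb:.0f} per megabyte")

# 8-inch: $50/MB versus 5.25-inch: $200/MB - four times worse on the one
# metric minicomputer customers cared about most.
```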

And so, 8-inch drive companies continued to focus on 8-inch drives, while a few renegade engineers opened new companies and worked hard on developing better 5.25-inch drives. In a few years, the 5.25-inch drives were just as efficient as the 8-inch drives, and a new market formed: that of the personal desktop computer. Suddenly, every computer maker in the market needed 5.25-inch drives.

One of the first minicomputers. On display at the Vienna Technical Museum. Image found on Wikipedia.

Now, the 8-inch drive company managers were far from stupid or ignorant. When they saw that there was a market for 5.25-inch drives, they decided to leap on the opportunity as well and develop their own 5.25-inch drives. Sadly, they were too late. They discovered that it takes time and effort to become acquainted with the demands of the new market, to adapt their manufacturing machinery and to change the entire company’s workflow in order to produce and supply computer makers with 5.25-inch drives. They joined the competition far too late, and even though they had been the leviathans of the industry just ten years earlier, they soon sank to the bottom and were driven out of business.

What happened to the engineers who drove forward the 5.25-inch drives revolution, you may ask? They became CEOs of the new 5.25-inch drive manufacturing companies. A few years later, when their own young engineers came to them and suggested that they invest in developing the new and faulty 3.5-inch drives, they decided that there was no market for this invention right now, no demand for it, and that it’s too inefficient anyway.

Care to guess what happened next? Ten years later, the 3.5-inch drives took over, portable computers utilizing them were everywhere, and the 5.25-inch drive companies crumbled away.

That is the essence of disruption: decisions that make sense in the present turn out to be clearly incorrect in the long term, when markets change. Companies that relax and invest only in sustaining innovations, instead of trying to radically change their products and reshape the markets themselves, are doomed to fail. In Peter Diamandis’ words –

“If you aren’t disrupting yourself, someone else is.”

Now that you understand the basics of the Theory of Disruption, let’s see how it applies to Magic.

 

Magic and Disruption

Wizards of the Coast has been making almost exclusively sustaining improvements over the last twenty years: its talented R&D team focused on releasing new expansions with new cards and new playing mechanics. WotC also tried to disrupt itself once by creating the Magic Online platform, but failed to support and nurture this disruptive innovation. The online platform remained an outdated relic – a relic that made money, to be sure, but was slowly becoming irrelevant in the online world of collectible card games.

In the last five years, many other collectible card games reared their heads online, including minor successes like Shadow Era (200,000 players, ~$156,000 annual revenue) and Urban Rivals (estimated ~$140,000 annual revenue). Each of the above made discoveries in the online world: they realized that players need to be offered cards for free, that they need to be lured to play every day, and that the free-to-play model can still prove profitable since the company’s costs are close to zero: the firm doesn’t need to physically print new cards or to distribute them to retailers. But these upstarts were still so small that WotC could afford to effectively ignore them. They didn’t pose a real threat to Magic.

Then Hearthstone burst into existence in 2014, and everything changed.


Hearthstone’s developers took the best traits of Magic and combined them with all the insights the online gaming industry has developed over recent years. They made the game essentially free to play to attract a large number of players, understanding that their revenues would come from the small fraction of players who spend some money on the game. They minimized wasted time by setting a time limit on every player’s turn, and by establishing a rule that players can only act during their own turn (so there’s no need to wait for the other player’s response after every move). They even did away with Magic’s eight-person draft tournaments, so that every player who drafts a deck can play against any other player who has drafted a deck, at any time. There’s no wasted time in Hearthstone – just games to play and fun to be had.

WotC was still deep asleep at that time. In July 2014, Magic brand manager Liz Lamb-Ferro told GamesBeat that –

“If you’re looking for just that immediate face-to-face, back-and-forth action-based game with not a lot of depth to it, then you can find that. … But if you want the extras … then you’re eventually going to find your way to Magic.”

Lamb-Ferro was right – Hearthstone IS a simpler game – but that simplicity streamlines gameplay, and thus makes the game more rapid and enjoyable for many players. And even if we were to accept that Hearthstone does not attract veteran players who “want the extras” (actually, it does), WotC should have realized that other online collectible card games would soon combine Magic’s sophistication with Hearthstone’s mechanisms for streamlining gameplay. And indeed, in 2014 a new game – SolForge – took all of the strengths of Hearthstone and added a mechanic of card transformation (each card transforming into three different versions of itself) that could only be possible in card games played online. SolForge doesn’t even have a physical version and could never have one, and the game is already costing Magic a few more veteran players.

This is the point when WotC began realizing that they were falling far behind the curve. And so, in the middle of 2015 they released Duels of the Planeswalkers 2016. I won’t even bother detailing all the infuriating problems with the game. Suffice it to say that it has garnered more negative reviews than positive ones, and made clear that WotC was still lagging far behind its competitors in its understanding of the virtual world, user experience, and what players actually want. In short, WotC found itself in the position of the 8-inch drive manufacturers, suddenly realizing that the market had changed under its nose in less than two years.

 

What Could WotC do?

The sad truth is that WotC can probably do nothing right now to fix Magic. The firm can continue churning out sustaining improvements – new expansions and new exciting cards – but it will find itself hard pressed to take over the digital landscape. Magic is a game that was designed for the physical world, and not for the current frenzied pace of the virtual collectible card games. Magic simply isn’t suitable for the new market, unless WotC changes the rules so much that it’s no longer the same game.

Could WotC change the rules in such a dramatic fashion? Yes, but at a great cost. The company could recreate the game online with new cards and rules, but it would have to invest time and effort in relearning the workings of the virtual world and creating a new platform for the revised Magic. Unfortunately, it’s not clear that WotC will have time to do that with Hearthstone, SolForge and a horde of other card games snarling at its heels. The future of Magic online does not look bright, to say the least.

Does that mean Magic the Gathering will vanish completely? Probably not. The Magic brand is still strong everywhere except for the virtual world, which means that in the next five years the game will remain in existence mostly in the physical world, where it will bring much joy to children in school breaks, and much money to the pockets of WotC. During these five years, WotC will have the opportunity to rethink and recreate the game for the next big market: virtual and augmented reality. If the firm succeeds in that front, it’s possible that Magic will be reinvented for the new-new market. If it fails and elects to keep the game anchored only in the physical world, then Magic will slowly but surely vanish away as the market changes and new and exciting games take over the attention span of the next generation.

That’s what happens when you disregard the Theory of Disruption.

 

Forecast: Flying Cars by 2035

Whenever a futurist talks about the future and lays out all the dazzling wealth technological advancements hold in store for us, there is one question that is always asked by the audience.

“Where is that flying car you promised me?”

Well, we may be drawing near to a future of flying cars. While the road to that future may still be long and arduous, I’m willing to forecast that twenty years from now we will have flying cars for civilian use – but only if three technological and societal conditions are fulfilled by that time.

In order to understand these conditions, let us first examine briefly the history of flying cars, and understand the reasons behind their absence in the present.

 

Flying Cars from the Past

Surprising as it may be, the concept of flying cars has been around far longer than the Back to the Future trilogy. Henry Ford himself produced a rudimentary, experimental ‘flying car’ in 1926, although really it was more of a mini-airplane for the average American consumer. Despite the public’s excitement, the idea crashed and burned within two years, together with the prototype and its test pilot.

One of the forgotten historical flying cars. A prototype of the Ave Mizar.

Since the 1920s, it seems like innovators and inventors came up with flying cars almost once a decade. You can see pictures of some of these cars in Popular Mechanics’ gallery. Some crashed and burned, in the tradition set by Ford. Others managed to soar sky high. None actually made it to mass production, for two main reasons:

  • Extremely wasteful: flying cars are extremely wasteful in terms of fuel consumption. Their energy efficiency is abysmal when compared to that of high-altitude and high-speed airplanes.
  • Extremely unsafe: let’s be honest for a moment, OK? You give people cars that can drive on what is essentially a one-dimensional road, and what do they do? They cause traffic accidents. What do you think would happen if you gave everyone the ability to drive a car in three dimensions? Crash, crash and burn all over again. For flying cars to become widely used in society, everyone would need to take flying lessons. Good luck with that.

These two limitations together ensured that flying cars for the masses remained a fantasy – and largely still are. In fact, I would go as far as saying that any new concept or prototype of a flying car that does not take these challenges into account is presented to the public as a ‘flying car’ only as a publicity stunt.

But now, things are beginning to change, because of three trends that together will provide answers to the main barriers standing in the way of flying cars.

 

The Three Trends that will Enable Flying Cars

There are three trends that, combined, will enable the use of flying cars by the public within twenty years.

First Trend: Massive Improvement in Aerial Drones Capabilities

If you visit your city’s playgrounds, you may find children there having fun flying drones around. The drones they’re using – which often cost less than $200 – would’ve been considered highly sophisticated weapons of war just twenty years ago, and would’ve been sold by arms manufacturers at prices on the order of millions of dollars.

Fourteen-year-old Morgan Tien with his drone. Source: Bend Bulletin

Dr. Peter Diamandis, innovator, billionaire and futurist, wrote in 2014 about the massive improvement in the capabilities of aerial drones. Briefly, current-day drones are a product of exponential improvement in computing elements (inertial measurement units), communications (GPS receivers and systems), and even sensors (digital cameras). All of the above – at their current sizes and prices – would not have been available even ten years ago.

Aerial drones are important for many reasons, not least because they may yet serve as the basis for a flying car. Innovators, makers and even firms today are beginning to strap together several drones, and turn them into a flying platform that can carry individuals around.

The most striking example of this kind comes from a Canadian inventor who has recently flown 275 meters on a drone platform he has basically fashioned in his garage.

Another, more cumbersome version of the Human-Transportation Drone (let’s call them HTDs from now on, shall we?) was demonstrated this week at the Las Vegas Convention Center. It is essentially a tiny helicopter with four double propellers attached, much like a large drone. It has room for just one traveler, and can fly for up to 23 minutes according to the manufacturers. Most importantly, the Ehang 184, as it’s called, is supposed to be autonomous, which brings us straight to the next trend: the rise of machine intelligence.

Ehang 184. Credit: Ehang. Originally found on Gizmag.

Second Trend: Machine Intelligence and Flying Cars

There can be little question that drones will keep on improving in their capabilities. We will improve our understanding of the science and technology behind aerial drones, and develop more efficient tools for aerial travel, including some that will carry people around. But will these tools be available for mass-use?

This is where the safety barrier comes into the picture. You can’t let the ordinary Joe Shmoe control a vehicle like the Ehang 184, or even a lightweight drone platform. Not without teaching them how to fly the thing, which would take long practice and lots of money, and would sharply limit the number of potential users.

This is where machine intelligence comes into the picture.

Autonomous control is virtually a must for publicly usable HTDs. Luckily, machine intelligence is making leaps and bounds forward, with autonomous (driverless) cars travelling the roads even today. If such autonomous systems can function for cars on the roads, why not do the same for drones in the air?

As things currently stand, all aerial drones will have to be controlled at least partly autonomously, in order to prevent collisions with other drones. NASA is planning a “Traffic Management Convention” for drones, which could include tens of thousands of drones – and many more than that, if the need arises. The next logical step, therefore, is to include future HTDs in this system, taking control out of the pilot’s hands and transferring it completely to the vehicle and the system managing it.

If said system for managing aerial traffic becomes a reality, and assuming drone capabilities advance enough to provide human transportation services, then autonomous HTDs for mass use will not be far behind.

The last two trends have covered the second barrier – inherent unsafety. The third trend, which I will present now, deals with the first barrier – the inefficient and wasteful use of energy.

Third Trend: Solar Energy

All small drones rely on electricity to function. Even a larger drone like the Ehang 184 that could be used for human transport, is powered by electricity, and can fly for 23 minutes before requiring a recharge. While 23 minutes may not sound like a lot of time, it’s more than enough for people to ‘hop’ from one side of most cities to the other, as long as there isn’t aerial congestion.

Of course, that’s the situation today. But batteries keep on improving. Elon Musk claims that by 2017, Tesla’s electric cars will have a 600-mile range on a single charge, for example. As batteries improve further, HTDs will be able to stay in the air for even longer periods of time, despite being powered by electricity alone. The reliance on electricity is important, since twenty years from now it is highly likely that we’ll have much cheaper electric energy coming directly from the sun.

Support for this argument comes from the exponential decline in the costs associated with producing and utilizing solar energy. Forty years ago, it would’ve cost about $75 to produce one watt of solar energy; today the cost is less than a single dollar per watt. And as prices go down, the number of solar panel installations soars sky-high, roughly doubling every two years. Worldwide solar capacity in 2014 was 53 times higher than in 2005.

Credit: Earth Policy Institute / Bloomberg. Originally found on Treehugger.
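A quick back-of-the-envelope check of those figures: a 53-fold growth in capacity between 2005 and 2014 implies a doubling time of a little under two years, which is consistent with the claim above. Here is that arithmetic as a minimal sketch:

```python
# Back-of-the-envelope check of the solar figures quoted above.
import math

growth_factor = 53      # capacity multiple between 2005 and 2014 (from the post)
years = 2014 - 2005     # 9 years

annual_growth = growth_factor ** (1 / years)
doubling_time = math.log(2) / math.log(annual_growth)

print(f"implied annual growth: {annual_growth:.2f}x per year")   # ~1.56x
print(f"implied doubling time: {doubling_time:.1f} years")       # ~1.6 years
```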

If the rising trend of solar energy does not grind to a halt sometime in the next decade, then we will obtain much of our electric energy from the sun. We won’t have usable passenger solar airplanes – these need high-energy jet fuel to operate – but we will have solar panels pretty much everywhere: covering the sides and top of every building, and quite possibly every car as well. Buildings would both consume and produce energy. Much of the unneeded energy would be saved in batteries, or almost instantaneously diverted via the smart grid to other spots in the city where it’ll be needed.

If that is the face of the future – and the trends support this view – then HTDs could be an optimal mode of transportation in the city of the future. Aerial drones could be deployed on the tops of houses and skyscrapers, where they would be constantly charged by solar panels until they need to take a passenger to another building. Such a hop would take only 10-15 minutes, followed by a recharging period of 30 minutes or so. The entire system would operate autonomously – without human control or interference – and be powered by the sun.
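To make the hop-and-recharge arithmetic explicit, here is a rough sketch using only the figures quoted above; the midpoint hop time is my own assumption.

```python
# Rough arithmetic for a single rooftop HTD, using the figures quoted above.
hop_minutes = 12.5       # "10-15 minutes" per hop - midpoint, my own assumption
recharge_minutes = 30    # "30 minutes or so" of solar recharging

cycle = hop_minutes + recharge_minutes
trips_per_hour = 60 / cycle
print(f"one passenger hop every {cycle:.1f} minutes "
      f"(~{trips_per_hour:.1f} hops per drone per hour)")
```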

 

Conclusions and Forecast for the Future

When can we expect this system to be deployed? Obviously it’s difficult to be certain about the future, particularly in cases where technological trends meet with societal, legal and political barriers to entry. Current culture will find it difficult to accept autonomous vehicles, and Big Fossil Fuel firms are still trying to pretend solar energy isn’t here to stay.

All the same, it seems that HTDs are already rearing their heads, with several inventors working separately to produce them. Their attempts are still extremely hesitant, but every attempt demonstrates the potential in HTDs and their viability for human transportation. I would therefore expect that in the next five years we will see demonstrations of HTDs (not for public use yet) that can carry individuals to a distance of at least one mile, and can be fully charged within one hour by solar panels alone. That is the easy forecast to make.

The more difficult forecast involves the use of autonomous aerial drones, the assimilation of HTDs into an overarching system that controls all the drones in a shared aerial space, and the mass deployment of HTDs in a city. Each of these achievements needs to be made separately in order to fulfill the larger vision of a flying car for the masses. I am going to take a wild guess here and suggest that, if no Hindenburg-like disaster happens, we’ll see real flying cars in our cities twenty years from now – by the year 2035. It is likely that these HTDs will be able to carry only a single individual, and will probably serve more as a ‘flying taxi’ between buildings for individual businesspeople than as a full-blown family flying car.

And then, finally, when people ask me where their flying car is, I will be able to provide a simple answer: “It’s parked on the roof.”

Four Robot Myths it’s Time We Let Go of

A week ago I lectured in front of an exceedingly intelligent group of young people in Israel – “The President’s Scientists and Inventors of the Future”, as they’re called. I decided to talk about the future of robots and their uses in society, and as an introduction to the lecture I tried to dispel a few myths about robots that I’ve heard repeatedly from older audiences. Perhaps not so surprisingly, the kids were just as disenchanted with these myths as I was. All the same, I’m writing the four robot myths here, for all the ‘old’ people (20+ years old) who are not as well acquainted with technology as our kids.

As a side note: I lectured in front of the Israeli teenagers about the future of robotics, even though I’m currently residing in the United States. That’s another thing robots are good for!

I’m lecturing as a tele-presence robot to a group of bright youths in Israel, at the Technion.

 

First Myth: Robots must be shaped as Humanoids

Ever since Karel Capek’s first play about robots, the general notion in the public was that robots have to resemble humans in their appearance: two legs, two hands and a head with a brain. Fortunately, most sci-fi authors stop at that point and do not add genitalia as well. The idea that robots have to look just like us is, quite frankly, ridiculous and stems from an overt appreciation of our own form.

Today, this myth is being dispelled rapidly. Autonomous vehicles – basically robots designed to travel on the roads – obviously look nothing like human beings. Even telepresence robot manufacturers have given up on notions of robotic arms and legs, and are producing robots that often look more like a broomstick on wheels. Robotic legs are simply too difficult to operate, too costly in energy, and much too fragile with the materials we have today.

Telepresence robots – no longer shaped like human beings. No arms, no legs, definitely no genitalia. Source: Neurala.

 

Second Myth: Robots have a Computer for a Brain

This myth is interesting in that it’s both true and false. Obviously, robots today are operated by artificial intelligence run on a computer. However, the artificial intelligence itself is vastly different from the simple, rule-based programs we’ve had in the past. The state-of-the-art AI engines are based on artificial neural networks: basically a very simple simulation of a small part of a biological brain.

The big breakthrough with artificial neural networks came about when Andrew Ng and other researchers in the field showed they could use cheap graphics processing units (GPUs) to run sophisticated simulations of artificial neural networks. Suddenly, artificial neural networks appeared everywhere, for a fraction of their previous price. Today, all the major IT companies are using them, including Google, Facebook, Baidu and others.

Although artificial neural networks have mostly been confined to IT in recent years, they are beginning to direct robot activity as well. By employing artificial neural networks, robots can start making sense of their surroundings, and can even be trained for new tasks by watching human beings do them instead of being programmed manually. In effect, robots employing this new technology can be thought of as having (exceedingly) rudimentary biological brains, and in the next decade they can be expected to reach an intelligence level similar to that of a dog or a chimpanzee. We will be able to train them for new tasks simply by instructing them verbally, or even by showing them what we mean.
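To give a feel for what ‘a very simple simulation of a small part of a biological brain’ means in practice, here is a toy sketch of a tiny neural network in plain NumPy, learning the XOR function from four examples. It is orders of magnitude smaller than the GPU-trained networks mentioned above, but the principle – adjusting connection weights from examples rather than programming explicit rules – is the same.

```python
# A toy two-layer neural network trained with gradient descent to learn XOR.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR truth table

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # input -> 8 hidden "neurons"
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # hidden -> output

lr = 1.0
for step in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: gradients of the squared error, propagated layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))   # should approach [[0], [1], [1], [0]]
```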

 

This video clip shows how an artificial neural network AI can ‘solve’ new situations and learn from games, until it gets to a point where it’s better than any human player.

 

Admittedly, the companies using artificial neural networks today are operating large clusters of GPUs that take up plenty of space and energy to operate. Such clusters cannot be easily placed in a robot’s ‘head’, or wherever its brain is supposed to be. However, this problem is easily solved when the third myth is dispelled.

 

Third Myth: Robots as Individual Units

This is yet another myth that we see very often in sci-fi. The Terminator, Asimov’s robots, R2D2 – those are all autonomous and individual units, operating by themselves without any connection to The Cloud. Which is hardly surprising, considering there was no information Cloud – or even widely available internet – back in the day when those tales and scripts were written.

Robots in the near future will function much more like a team of ants than as individual units. Any piece of information that one robot acquires and deems important will be uploaded to the main servers, analyzed, and shared with the other robots as needed. Robots will, in effect, learn from each other in a process that will increase their intelligence, experience and knowledge exponentially over time. Indeed, shared learning will accelerate the rate of AI development, since the more robots we have in society, the smarter they will become. And the smarter they become, the more we will want to assimilate them into our daily lives.

The Tesla cars are a good example for this sort of mutual learning and knowledge sharing. In the words of Elon Musk, Tesla’s CEO –

“The whole Tesla fleet operates as a network. When one car learns something, they all learn it.”

Elon Musk and the Tesla Model X: the cars that learn from each other. Source: AP and Business Insider.
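As a toy illustration of this fleet-learning pattern – one robot reports, every robot benefits – here is a minimal sketch. The class and method names are invented for illustration only; real fleets, Tesla’s included, obviously rely on far more elaborate infrastructure.

```python
# A toy sketch of shared fleet learning; all names here are invented for
# illustration and do not correspond to any real system.
class FleetKnowledge:
    """Stand-in for the 'main servers' that pool what individual robots learn."""
    def __init__(self):
        self._facts = {}

    def report(self, robot_id, key, value):
        # A robot uploads something it deems important.
        self._facts[key] = {"value": value, "source": robot_id}

    def lookup(self, key):
        # Any other robot in the fleet can benefit from what was learned.
        return self._facts.get(key)

fleet = FleetKnowledge()
fleet.report("car-0042", "pothole@5th-and-main", {"severity": "high"})
print(fleet.lookup("pothole@5th-and-main"))  # now available to every other car
```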

Fourth Myth: Robots can’t make Moral Decisions

In my experience, many people still adhere to this myth, under the belief that robots do not have consciousness, and thus cannot make moral decisions. This is a false correlation: I can easily program an autonomous vehicle to stop before hitting human beings on the road, even without the vehicle enjoying any kind of consciousness. Moral behavior, in this case, is the product of programming.

Things get complicated when we realize that autonomous vehicles, in particular, will have to make novel moral decisions that no human being was ever required to make in the past. What should an autonomous vehicle do, for example, when it loses control over its brakes, and finds itself rushing to collision with a man crossing the road? Obviously, it should veer to the side of the road and hit the wall. But what should it do if it calculates that its ‘driver’ will be killed as a result of the collision into the wall? Who is more important in this case? And what happens if two people cross the road instead of one? What if one of those people is a pregnant woman?

These questions demonstrate that it is hardly enough to program an autonomous vehicle for specific encounters. Rather, we need to program into it (or train it to obey) a set of moral rules – heuristics – according to which the robot will interpret any new occurrence and reach a decision accordingly.
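To illustrate what such a heuristic rule set might look like in code, here is a toy sketch in which the vehicle ranks candidate manoeuvres against an ordered list of rules. The rules and their ordering are invented for illustration only – they are not a real or recommended autonomous-vehicle policy.

```python
# A toy moral-heuristics sketch: earlier rules take priority over later ones.
RULES = [
    ("avoid harming pedestrians", lambda m: m["pedestrians_hit"] == 0),
    ("avoid harming the passenger", lambda m: not m["passenger_harmed"]),
    ("minimize property damage", lambda m: not m["property_damaged"]),
]

def choose_manoeuvre(options):
    """Pick the option satisfying the highest-priority rules."""
    def score(m):
        # Tuple of booleans; True sorts above False, so earlier rules dominate.
        return tuple(rule(m) for _, rule in RULES)
    return max(options, key=score)

options = [
    {"name": "brake hard",     "pedestrians_hit": 1, "passenger_harmed": False, "property_damaged": False},
    {"name": "veer into wall", "pedestrians_hit": 0, "passenger_harmed": True,  "property_damaged": True},
]
print(choose_manoeuvre(options)["name"])   # -> "veer into wall"
```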

And so, robots must make moral decisions.

 

Conclusion

As I wrote in the beginning of this post, the youth and the ‘techies’ are already aware of how out-of-date these myths are. Nobody as yet, though, knows where the new capabilities of robots will take us when they are combined together. What will our society look like, when robots are everywhere, sharing their intelligence, learning from everything they see and hear, and making moral decisions not from an individual unit perception (as we human beings do), but from an overarching perception spanning insights and data from millions of units at the same time?

This is where we are heading – a super-intelligence composed of incredibly sophisticated AI, with robots as its eyes, ears and fingertips. It’s a frightening future, to be sure. How could we possibly control such a super-intelligence?

That’s a topic for a future post. In the meantime, let me know if there are any other myths about robots you think it’s time to ditch!

 

Forecast for 2016: The Year of the Data Race; or – How Our Politicians Will Mess with Our Minds in 2016

Almost four years ago, the presidential elections took place in the United States. Barack Obama competed against Mitt Romney in the race for the White House. Both candidates delivered inspiring speeches, appeared at every institution that would have them, and employed hundreds of paid consultants and volunteers who advertised them throughout the nation. In the end, Obama won the race for the presidency, possibly because of his opinions and ideas… or because of his reliance on data scientists. In fact, as Sasha Issenberg’s MIT Technology Review article on the 2012 elections describes –

“Romney’s data science team was less than one-tenth the size of Obama’s analytics department.”

How did Obama utilize all of those data scientists?

 


Analyzing the Individual Voter

Up to 2012, individual voters were analyzed according to a relatively simplistic system that took into account only a few parameters, such as age and place of residence. The messages those potential voters received on their phones and in their physical mailboxes and virtual inboxes were customized according to these parameters. Obama’s team of data scientists expanded the list to dozens of different parameters and criteria. They then utilized a system in which customized messages were mailed to certain representative voters, who were later surveyed so that the scientists could figure out how their opinions changed according to the structure of the messages sent.
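As a rough sketch of what that kind of per-voter modelling can look like, here is a minimal example using scikit-learn: fit a model on voters whose reaction to a test message is known, then score the rest for ‘persuadability’. The features and data are synthetic stand-ins; the real campaign models used dozens of parameters per voter.

```python
# A minimal, synthetic sketch of per-voter "persuadability" scoring.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic training set: three made-up features per voter; the label marks
# whether the voter's opinion shifted after receiving a test message.
X_train = rng.normal(size=(500, 3))
y_train = (X_train @ np.array([0.2, 1.0, -0.8])
           + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X_train, y_train)

# Score the rest of the voter file and contact the most persuadable first.
X_voters = rng.normal(size=(5, 3))
persuadability = model.predict_proba(X_voters)[:, 1]
print(np.round(persuadability, 2))
```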

This level of analysis and understanding of the individual voters and the messages that helped them change their opinions aided Obama in delivering the right messages, at the right time, to the persuadable people. If the term “persuadable” strikes you as sinister, as if Obama’s team were preying on the weak of mind or those sitting on the fence, you should be aware that it was used by Terry Walsh, who coordinated Obama’s campaign’s polling and paid-media spending.

Of course, being a “persuadable” voter does not mean that you’re a helpless dummy. Rather, it just means that you’re still uncertain which way to turn. But when political parties can find those undecided voters, focus on them and analyze each one with the most sophisticated computer models available to find out all about their levers and buttons, how much free choice does that leave those people?

I could go on describing other strategies utilized by Obama’s team in the 2012 elections. They identified voters who were likely to ‘switch sides’ following just one phone call, and had about 500,000 conversations with those voters. They supplied a data collection firm with the addresses of many “easily persuadable” voters, and received in return the TV-watching records of those households. That way, the campaign team could maximize the efficiency of TV advertisements – airing them at the right times, on the right channels, and in the right places. All of the above is well recorded, and described in Issenberg’s article and other resources (like this, that, and others).

 

The Republican Drowning Whale

Obama wasn’t the only one to utilize big data and predictive analytics in the 2012 campaign. His opponent, Mitt Romney, had a team of data scientists of his own. Unfortunately for Romney, his team didn’t even come close to the level of operations of Obama’s team. Romney’s team invested much of its effort in an app named Orca, which was supposed to indicate which of the expected Republican voters had actually turned up to vote – and to send messages to the Republican slackers, encouraging them to haul their tucheses to the voting booths. In practice, the app was horribly conceived, and crashed numerous times during Election Day, leading to utter confusion about what was going on.

 

Mitt Romney being packed up after the massive failure of the Orca system in the 2012 presidential elections. Image originally from Phil Ebersole’s blog.

Regardless of the relative success of the Democrats’ and the Republicans’ data systems, one thing is clear: both parties are going to use big data and predictive analytics in the upcoming 2016 elections. In fact, we are entering a very interesting stage in the history of the 21st century: the Data Race.

 

From Space to Data

The period in time known as the Space Race took place in the 1960s, when the United States competed against the Soviet Union in a race to space. As a result of the Space Race, space launch technologies developed and made progress in leaps and bounds, with both countries fighting to demonstrate their superior science and technology. Great need – and great budgets – produce great results quickly.

In 2016, we will see a new kind of race starting – the Data Race. In 2012 it wasn’t really a race: the Democrats basically stepped on the Republicans. In 2016, however, the real Data Race in politics will be on. The Democrats will gather their teams of data scientists once more, and build on the piles of data gathered in the 2012 elections and since then. The Republicans – possibly Trump, with his self-funded election campaign – will learn from their mistakes in 2012, hire the best data scientists they can find, and utilize methodologies similar to or better than those developed by the Democrats.

In short, both parties will find themselves in the midst of a Data Race, striving to obtain as much data as they can about American citizens – about our lifestyles, habits, choices and any other tidbit of information that can be used to understand the individual voter, and how best to approach him or her and convert them to the party’s point of view. The data gathering and analysis systems will cost a lot, obviously, but since recent rulings in America allow larger contributions to be made to political candidates, money should not be a problem.

 

Conclusion: Where are We Heading?

It’s quite obvious that both American parties in 2016 are going to compete in a Data Race. The bigger question is whether we should even allow them to do it so freely. Democracy, after all, is based on the assumption that every person can make up his or her own mind and decisions. Do we really honor that core assumption when political candidates can analyze human beings with the power of super-computers, big data and predictive analytics? Can an individual citizen truly choose freely, when powers on both sides are pulling and pushing at that individual’s levers and buttons, with methods tested and proven on millions of similarly-minded individuals?

Using predictive analytics in politics holds an inherent threat to democracy: by understanding each individual, we can also devise approaches and methodologies to influence every individual with maximal efficiency. This approach has the potential to turn most individuals into mere puppets in the hands of the powerful and the affluent.

Does that mean we should refrain from using big data and predictive analytics in politics? Of course not – but we can regulate their use so that instead of campaign managers focusing their efforts on the “easily persuadable”, they use the data gleaned from the public to understand people’s real concerns and work to address them. We should all hope our politicians are heading in that direction, and if they aren’t – we should give them a shove towards it.