Did Tesla Break into Cars? Or – Are We Witnessing a Decline in Private Ownership?

Jason Hughes is a white hat hacker – a ‘good’ hacker, working diligently to discover ways in which existing systems can be hacked into. During one of his most recent forays, as described in TeslaRati, he analyzed a “series of alphanumeric characters found embedded within Tesla’s most recent firmware 7.1”. According to Hughes, the update included the badges for the upcoming new Tesla model, the P100D. Hughes tweeted about this discovery to Tesla and to the public, and went happily to sleep.

And then things got weird.

According to Hughes, Tesla then attempted to access his car’s computer and significantly downgrade the firmware, presumably in order to delete the information about the new model. Hughes managed to stop the incursion in the nick of time, and tweeted angrily about the event. Elon Musk, CEO of Tesla, tweeted back that he had nothing to do with it, and seemingly that’s the end of the story. Hughes is now cool with Musk, and everybody is happy again.


But what can this incident tell us about the future of private ownership?

 

A Decline in Private Ownership?

One of Paul Saffo’s rules for effective forecasting is to “embrace the things that don’t fit”. Curious stories and anecdotes from the present can give us clues about the shape of the future. The above story seems to be a rather important clue about the shape of things to come, and about a future where personal ownership of any networked device conflicts with the interests of the original manufacturer.

Tesla may or may not have a legal justification to alter the firmware installed in Hughes’ car. If you want to be generous, you can even assume that the system asked Hughes for permission to ‘update’ (actually downgrade) his firmware. Hughes was tech-savvy enough to understand the full meaning of such an update – but how many of us are in possession of such knowledge? In effect, and if Hughes is telling the truth, it turns out that Tesla attempted to alter the properties and functions of Hughes’ car in order to prevent damage to the company itself.

Of course, this is not the first incident of the kind. Seven years ago, Amazon chose to reach remotely into many Kindle devices held and owned by private citizens, and to delete some digital books on those devices. The books that were deleted? In a bizarre twist of fate, they were George Orwell’s books – 1984 and Animal Farm – the first of which describes a dystopian society in which the citizen has almost no power over his life. In 1984, the government has all the power. In 2016, it’s starting to seem that much of this power belongs to the big IT companies that can remotely reprogram the devices they sell us.

Image originally from Engadget.

 

The Legal Side

I’m not saying that remote updates are bad for you. On the contrary: remote updates and upgrades of systems are one of the reasons for the increasing rate of technological progress. Because of virtual upgrades, smartphones, computers and even cars no longer need to be brought physically to service stations to be upgraded. However, these two episodes are a good reminder that by giving the IT companies leeway into our devices, we are opening ourselves to their needs – which may not always align with our own.

I have not been able to find any legal analysis of Hughes’ and Tesla’s case, but I suspect that if the case is ever brought to court, Tesla might have to answer some difficult questions. The most important question would probably be whether the company even bothered to ask Hughes for permission to make a change in his property. If Tesla did not even do that, let it be penalized harshly, to prevent other companies from following in its footsteps.

Obviously, this is not a trend yet. I can’t just take two separate cases and cluster them together. However, the mechanism behind both incidents is virtually the same: because of ever-present connectivity, the original manufacturers retain some control over the devices owned by end-users. Connectivity is only going to proliferate in the near future, and therefore we should keep a watchful eye for similar cases.

 

Conclusions

This is new ground we’re treading. Never before could upgrades to physical user-owned devices be implemented so easily, to the benefit of most users – but possibly also to the detriment of some. We need to draw clear rules for how firms can access our devices and on what grounds. These rules, restrictions and laws will become clearer as we move into the future, and it’s up to the public to keep close scrutiny on lawmakers and make sure that the industry does not take over the private ownership of end-user devices.

Oh, and Microsoft? Please stop repeatedly asking me to upgrade to Windows 10. For the 74th time, I still don’t want to. And yes, I counted. Get the hint, won’t ya?

 


Science Just Wants To Be Free

This article was originally published in the Huffington Post

 

For a long time now, scientists have been held in thrall by publishers. They have worked voluntarily – without getting any pay – as editors and reviewers for the publishers, and they have allowed their research to be published in scientific journals without receiving anything in return. No wonder scientific publishing has long been considered a lucrative business.

Well, that’s no longer the case. Now, scientific publishers are struggling to maintain their stranglehold over scientists. If they succeed, science and the pace of progress will take a hit. Luckily, the entire scientific landscape is turning on them – but a little support from the public will go a long way in ensuring the eventual downfall of an institution that is no longer relevant or useful to society.

To understand why things are changing, we need to look back in history to 1665, when the British Royal Society began publishing research results in a journal called Philosophical Transactions of the Royal Society. Since the number of pages available in each issue was limited, the editors could only pick the most interesting and credible papers to appear in the journal. As a result, scientists from all over Britain fought to have their research published in the journal, and any scientist whose research was published in an issue gained immediate recognition throughout Britain. Scientists were even willing to become editors for scientific journals, since it was a position that commanded respect – and provided them power to push their views and agendas in science.

Thus was the deal struck between scientific publishers and scientists: the journals provided a platform for the scientists to present their research, and the scientists fought tooth and nail to have their papers accepted into the journals – often paying from their own pockets for it to happen. The journal publishers then held full copyright over the papers, to ensure that the same paper would not be published in a competing journal.

That, at least, was the old way for publishing scientific research. The reason that the journal publishers were so successful in the 20th century was that they acted as aggregators and selectors of knowledge. They employed the best scientists in the world as editors (almost always for free) to select the best papers, and they aggregated together all the necessary publishing processes in one place.

And then the internet appeared, along with a host of other automated processes that let every scientist publish and disseminate a new paper with minimal effort. Suddenly, publishing a new scientific paper and making the scientific community aware of it could have a radical new price tag: it could be completely free.

Free Science

Let’s go through the process of publishing a research paper, and see how easy and effortless it became:

  1. The scientist sends the paper to the journal: Can now be conducted easily through the internet, with no cost for mail delivery.
  2. The paper is rerouted to the editor dealing with the paper’s topic: This is done automatically – the authors specify certain keywords, which route the paper straight to the right editor’s e-mail. Since the editor is actually a scientist volunteering to do the work for the publisher, there’s no cost attached anyway. Neither is there need for a human secretary to spend time and effort on cataloguing papers and sending them to editors manually.
  3. The editor sends the paper to specific scientific reviewers: All the reviewers are working for free, so the publishers don’t spend any money there either.

Let’s assume that the paper was confirmed, and is going to appear in the journal. Now the publisher must:

  1. Paginate, proofread, typeset, and ensure the use of proper graphics in the paper: These tasks are now performed nearly automatically using word processing programs, and are usually handled by the original authors of the paper.
  2. Print and distribute the journal: This is the only step that costs actual money by necessity, since it is performed in the physical world, and atoms are notoriously more expensive than bits. But do we even need this step anymore? I have been walking the corridors of academia for more than ten years, and I’ve yet to see a scientist with his nose buried in a printed journal. Instead, scientists are reading the papers on their computer screens, or printing them in their offices. The mass-printed version is almost completely redundant. There is simply no need for it.

In conclusion, it’s easy to see that while the publishers served an important role in science a few decades ago, they are just not necessary today. The above steps can easily be conducted by community-managed sites like arXiv, and even the selection process of high-quality papers can be performed today by the scientists themselves, in forums like Faculty of 1000.

The publishers have become redundant. But worse than that: they are damaging the progress of science and technology.

The New Producers of Knowledge

A few years from now, the producers of knowledge will not be human scientists but computer programs and algorithms. Programs like IBM’s Watson will skim through hundreds of thousands of research papers and derive new meanings and insights from them. This will be an entirely new field of scientific research: retrospective research.

Computerized retrospective research is happening right now. A new model in developmental biology, for example, was discovered by an artificial intelligence engine that went over just 16 experiments published in the past. Imagine what will happen when AI algorithms cross-match thousands of papers from different disciplines, and come up with new theories and models that are supported by the research of thousands of scientists from the past!

For that to happen, however, the programs need to be able to go over the vast number of research papers out there, most of which are copyrighted, and held in the hands of the publishers.

You may say this is not a real problem. After all, IBM and other large data companies can easily cover the millions of dollars which the publishers will demand annually for access to the scientific content. What will the academic researchers do, though? Many of them do not enjoy the backing of big industry, and will not have access to scientific data from the past. Even top academic institutes like Harvard University find themselves hard-pressed to cover the annual costs demanded by the publishers for accessing papers from the past.

Many ventures for using this data are based on the assumption that information is essentially free. We know that Google is wary of uploading scanned books from the last few decades, even if these books are no longer in circulation. Google doesn’t want to be sued by the copyright holders – and thus is waiting for the copyrights to expire before it uploads each entire book and lets the public enjoy it for free. So many free projects could be conducted to derive scientific insights from literally millions of research papers from the past. Are we really going to wait for nearly a hundred years before we can use all that knowledge? Knowledge, I should mention, that was gathered by scientists funded by the public – and should thus remain in the hands of the public.

 

What Can We Do?

Scientific publishers are slowly dying, while free publication and open access to papers are becoming the norm. The process of transition, though, is going to take a long time still, and provides no easy and immediate solution for all those millions of research papers from the last century. What can we do about them?

Here’s one proposal. It’s radical, but it highlights one possible way of action: have the government, or an international coalition of governments, purchase the copyrights for all copyrighted scientific papers, and open them to the public. The venture will cost a few billion dollars, true, but it will only have to occur once for the entire scientific publishing field to change its face. It will set right the ancient wrong of hiding research behind paywalls. That wrong was necessary in the past, when we needed the publishers, but now there is simply no justification for it. Most importantly, this move will mean that science can accelerate its pace by easily relying on the roots cultivated by past generations of scientists.

If governments don’t do that, the public will. Already we see the rise of websites like Sci-Hub, which provide free (i.e. pirated) access to more than 47 million research papers. Having been persecuted by both the publishers and the government, Sci-Hub has recently been forced to move to the Darknet – the dark and anonymous section of the internet. Scientists who want to browse through past research results – which were almost entirely paid for by the public – will thus have to move over to the Darknet, where weapons smugglers, pedophiles and drug dealers lurk today. That’s a sad turn of events that should make you think. Just be careful not to sell your thoughts to the scholarly publishers, or they may never see the light of day.

 

Dr Roey Tzezana is a senior analyst at Wikistrat, an academic manager of foresight courses at Tel Aviv University, blogger at Curating The Future, the director of the Simpolitix project for political forecasting, and founder of TeleBuddy.

Images of Israeli War Machines from 2048

Do you want to know what war will look like in 2048? The Israeli artist Pavel Postovit has drawn a series of remarkable images depicting soldiers, robots and mechs – all in the service of the Israeli army in 2048. He even drew aerial ships resembling the infamous Helicarrier from The Avengers (which had an unfortunate tendency to crash every second week or so).

Pavel is not the first artist to make an attempt to envision the future of war. Jakub Rozalski before him tried to reimagine World War II with robots, and Simon Stalenhag has many drawings that demonstrate what warfare could look like in the future. Their drawings, obviously, are a way to forecast possible futures and bring them to our attention.

Pavel’s drawings may not be based on rigorous foresight research, but they don’t have to be. They are mainly focused on showing us one way the future may unfold. Pavel himself does not pretend to be a futures researcher, and told me that –

“I was influenced by all kind of different things – Elysium, District 9 [both are sci-fi movies from the last few years], and from my military service. I was in field intelligence, on the border with Syria, and was constantly exposed to all kinds of weapons, both ours and the Syrians.”

Here are a couple of drawings to help you understand Pavel’s vision of the future, divided according to categories I added. Be aware that the last picture is the most haunting of all.

 

Mechs in the Battlefield

Mechs are ground vehicles with legs – much like Boston Dynamics’ Alpha Dog, on which they are presumably based. The most innovative of those mechs is the DreamCatcher – a unit with arms and hands that is used to collect “biological intelligence in hostile territory”. In one particularly disturbing image we can see why it’s called “DreamCatcher”, as the mech beheads a deceased human fighter and takes the head for inspection.


Apparently, mechs in Pavel’s future work almost autonomously – they can reach hostile areas on the battlefield and carry out complicated tasks on their own.

 

Soldiers and Aerial Drones

Soldiers in the field will be accompanied by aerial drones. Some of the drones will be larger than others – the Tinkerbell, for example, can serve both for recon and as personal CAS (Close Air Support) for the individual soldier.


Other aerial drones will be much smaller, and will be deployed as a swarm. The Blackmoth, for example, is a swarm of stealthy micro-UAVs used to gather tactical intelligence on the battlefield.


 

Technology vs. Simplicity

Throughout Pavel’s visions of the future we can see a repeated pattern: the technological prowess of the West colliding with the simple lifestyle of natives. Since the images depict the Israeli army, it’s obvious why the machines are essentially fighting or constraining the Palestinians. You can see in the images below what life might look like in 2048 for Arab civilians and combatants.


Another interesting picture shows Arab combatants dealing with a heavily armed combat mech by trying to make it lose its balance. At the same time, one of the combatants is sitting to the side with a laptop – presumably trying to hack into the robot.


 

The Last Image

If the images above have made you feel somewhat shaken, don’t worry – it’s perfectly normal. You’re seeing here a new kind of warfare, in which robots take extremely active parts against human beings. That’s war for you: brutal and horrible, and there’s not much to be done about that. If robots can actually minimize the amount of suffering on the battlefield by replacing soldiers, and by carrying out tasks with minimal casualties for both sides – it might actually be better than the human-based model of war.

Perhaps that is why I find the last picture the most horrendous one. You can see in it a combatant, presumably an Arab, with a bloody machete next to him and two prisoners that he’s holding in a cage. The combatant is reading a James Bond book. The symbolism is clear: this is the new kind of terrorist / combatant. He is vicious, ruthless, and well-educated in Western culture – at least well enough to develop his own ideas for using technology to carry out his ideology. In other words, this is an ISIS combatant, of the kind who has begun to employ some of the technologies of the West, like aerial drones, without adhering to the moral theories that restrict their use by nations.


 

Conclusion

The future of warfare in Pavel’s vision is beginning to leave the paradigm of human-on-human action, and is rapidly moving into robotic warfare. It is very difficult to think of a military future that does not include robots in it, and obviously we should start thinking right now about the consequences, and how (and whether) we can imbue robots with sufficient autonomous capabilities to carry out missions on their own, while still minimizing casualties on the enemy side.

You can check out the rest of Pavel’s (highly recommended) drawings in THIS LINK.

The Future of Genetic Engineering: Following the Eight Pathways of Technological Advancement

The future of genetic engineering at the moment is a mystery to everyone. The concept of reprogramming life is an oh-so-cool idea, but it is mostly being used nowadays in the most sophisticated labs. How will genetic engineering change in the future, though? Who will use it? And how?

In an attempt to provide a starting point for a discussion, I’ve analyzed the issue according to Daniel Burrus’ “Eight Pathways of Technological Advancement”, found in his book Flash Foresight. While the book provides more insights about creativity and business skills than about foresight, it does contain some interesting gems like the Eight Pathways. I’ve led workshops in the past, where I taught chief executives how to use this methodology to gain insights about the future of their products, and it was a great success. So in this post we’ll try applying it to genetic engineering – and we’ll see what comes out.


Eight Pathways of Technological Advancement

Make no mistake: technology does not “want” to advance or to improve. There is no law of nature dictating that technology will advance, or in what direction. Human beings improve technology, generation after generation, to better solve their problems and make their lives easier. Since we roughly understand humans and their needs and wants, we can often identify how technologies will improve in order to answer those needs. The Eight Pathways of Technological Advancement, therefore, are generally those that adapt technology to our needs.

Let’s go briefly over the pathways, one by one. If you want a better understanding and more elaborate explanations, I suggest you read the full Flash Foresight book.

First Pathway: Dematerialization

By dematerialization we mean literally removing atoms from the product, leading directly to its miniaturization. Cellular phones, for example, have become much smaller over the years, as have computers, data storage devices and generally any tool that humans wanted to make more efficient.

Of course, not every product undergoes dematerialization. Even if we were to miniaturize car engines, the cars themselves would still have to stay large enough to hold at least one passenger comfortably. So we need to take into account that the device should still be able to fulfil its original purpose.

Second Pathway: Virtualization

Virtualization means that we take certain processes and products that currently exist or are being conducted in the physical world, and transfer them fully or partially into the virtual world. In the virtual world, processes are generally streamlined, and products have almost no cost. For example, modern car companies take as little as 12 months to release a new car model to market. How can engineers complete the design, modeling and safety testing of such complicated models in less than a year? They’re simply using virtualized simulation and modeling tools to design the cars, up to the point when they’re crashing virtual cars with virtual crash dummies in them into virtual walls to gain insights about their (physical) safety.

Thanks to virtualization, crash dummies everywhere can relax. Image originally from @TheCrashDummies.

Third Pathway: Mobility

Human beings invent technology to help them fulfill certain needs and take care of their woes. Once that technology is invented, it’s obvious that they would like to enjoy it everywhere they go, at any time. That is why technologies become more mobile as the years go by: in the past, people could only speak on the phone from the post office; today, wireless phones can be used anywhere, anytime. Similarly, cloud computing enables us to work on every computer as though it were our own, by utilizing cloud applications like Gmail, Dropbox, and others.

Fourth Pathway: Product Intelligence

This pathway does not need much of an explanation: we experience its results every day. Whenever our GPS navigation system speaks up in our car, we are reminded of the artificial intelligence engines that help us in our lives. As Kevin Kelly wrote in his WIRED piece in 2014 – “There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ.”

Fifth Pathway: Networking

The power of networking – connecting people and items – becomes clear in our modern age: Napster was the result of networking; torrents are the result of networking; even bitcoin and blockchain technology are manifestations of networking. Since products and services can gain so much from being connected between users, many of them take this pathway into the future.

Sixth Pathway: Interactivity

As products gain intelligence of their own, they also become more interactive. Google completes our search phrases for us; Amazon suggests the products we should desire according to our past purchases. These service providers are interacting with us automatically, to provide a better service for the individual, instead of catering to some average of the masses.

Seventh Pathway: Globalization

Networking means that we can make connections all over the world, and as a result – products and services become global. Crowdfunding firms like Kickstarter, which suddenly enable local businesses to gain support from the global community, are a great example of globalization. Small firms can find themselves capable of catering to a global market thanks to improvements in mail delivery systems – like a company that delivers socks monthly – and that is another example of globalization.

Eighth Pathway: Convergence

Industries are converging, and so are services and products. The iPhone is a convergence of a cellular phone, a computer, a touch screen, a GPS receiver, a camera, and several other products that have come together to create a unique device. Similarly, modern aerial drones could also be considered a result of the convergence pathway: a camera, a GPS receiver, an inertia measurement unit, and a few propellers to carry the entire unit in the air. All of the above are useful on their own, but together they create a product that is much more than the sum of their parts.

 

How could genetic engineering progress along the Eight Pathways of technological improvement?

 

Pathways for Genetic Engineering

First, it’s safe to assume that genetic engineering as a practice will require less space and fewer tools to conduct (the Dematerialization of genetic engineering). That is hardly surprising, since biotechnology companies are constantly releasing new kits and appliances that streamline, simplify and add efficiency to lab work. This also answers the need for mobility (the third pathway), since it means complicated procedures could be performed outside the top universities and labs.

As part of streamlining the work process of genetic engineers, some elements will be virtualized. As a matter of fact, the Virtualization of genetic engineering has been taking place over the past two decades, with scientists ordering DNA and RNA sequences over the internet, and browsing virtual genomic databases like NCBI and UCSC. The next step of virtualization seems to be occurring right now, with companies like Genome Compiler creating ‘browsers’ for the genome, with bright colors and easily understandable explanations that reduce the level of skill needed to plan an experiment involving genetic engineering.

A screenshot from Genome Compiler.

How can we apply the pathway of Product Intelligence to genetic engineering? Quite easily: virtual platforms for designing genetic engineering experiments will involve AI engines that will aid the experimenter with his task. The AI assistant will understand what the experimenter wants to do, suggest ways, methodologies and DNA sequences that will help him accomplish it, and possibly even – in a decade or two – conduct the experiment automatically. Obviously, that also answers the criterion of Interactivity.

If this described future sounds far-fetched, you should take into account that there are already lab robots conducting the most convoluted experiments, like Adam and Eve (see below). As the field of robotics makes strides forward, it is actually possible that we will see similar rudimentary robots working in makeshift biology Do-It-Yourself labs.

Networking and Globalization are essentially the same for the purposes of this discussion, and complement Virtualization nicely. Communities of biology enthusiasts are already forming all over the world, and they’re sharing their ideas and virtual schematics with each other. The iGEM (International Genetically Engineered Machines) annual competition is good evidence of that: undergraduate students worldwide are taking part in this competition, designing parts of useful genetic code and sharing them freely with each other. That’s Networking and Globalization for sure.

Last but not least, we have Convergence – the convergence of processes, products and services into a single overarching system of genetic engineering.

Well, then, what would a convergence of all the above pathways look like?

 

The Convergence of Genetic Engineering

Converging all of these pathways together leads us to a future in which genetic engineering can be performed by nearly anyone, at any place. The process of designing genetic engineering projects will be largely virtualized, and will be aided by artificial assistants and advisors. The actual genetic engineering will be conducted in sophisticated labs – as well as in makers’ houses, and in DIY enthusiasts’ kitchens. Ideas for new projects, and designs of successful past projects, will be shared on the internet. Parts of this vision – like the virtualization of experiments – are happening right now. Other parts, like AI involvement, are still in the works.

What does this future mean for us? Well, it all depends on whether you’re optimistic or pessimistic. If you’re prone to pessimism, this future may look to you like a disaster waiting to happen. When teenagers and terrorists are capable of designing and creating deadly bacteria and viruses, the future of mankind is far from safe. If you’re an optimist, you could consider that as the power to re-engineer life comes down to the masses, innovations will rise everywhere. We will see glowing trees replacing lightbulbs in the streets, genetically engineered crops with better traits than ever before, and therapeutics (and drugs) being synthesized in human intestines. The truth, as usual, is somewhere in between – and we still have to discover it.

 

Conclusion

If you’ve been reading this blog for some time, you may have noticed a recurring pattern: I’ll be inquiring into a certain subject, and then analyzing it according to a certain foresight methodology. Such posts have so far covered the Business Theory of Disruption (used to analyze the future of collectible card games), Causal Layered Analysis (used to analyze the future of aerial drones and of medical mistakes) and Pace Layer Thinking. I hope to go on giving you some orderly and proven methodologies that help in thinking about the future.

How you actually use these methodologies in your business, class or salon talk – well, that’s up to you.

 

 

When the Marine Corps is Using Science Fiction to Prepare for the Future

When most of us think of the Marine Corps, we usually imagine sturdy soldiers charging headlong into battle, or carefully sniping at an enemy combatant from the tops of buildings. We probably don’t imagine them reading – or writing – science fiction. And yet, that’s exactly what 15 marines are about to do, two weeks from now.

The Marine Corps Warfighting Lab (I bet you didn’t know they have one) and The Atlantic Council are holding a Science Fiction Futures Workshop in early February. And guess what? They’re looking for “young, creative minds”. You probably have to be a marine, but even if you aren’t – maybe you’ll have a chance if you submit your application as well.


 

Forecast: Flying Cars by 2035

Whenever a futurist talks about the future and lays out all the dazzling wealth technological advancements hold in store for us, there is one question that is always asked by the audience.

“Where is that flying car you promised me?”

Well, we may be drawing near to a future of flying cars. While the road to that future may still be long and arduous, I’m willing to forecast that twenty years from now we will have flying cars for use by civilians – but only if three technological and societal conditions are fulfilled by that time.

In order to understand these conditions, let us first examine briefly the history of flying cars, and understand the reasons behind their absence in the present.

 

Flying Cars from the Past

Surprising as it may be, the concept of flying cars has been around far longer than the Back to the Future trilogy. Henry Ford himself produced, in 1926, a rudimentary and experimental ‘flying car’, although really it was more of a mini-airplane for the average American consumer. Despite the excitement from the public, the idea crashed and burned within two years, together with the prototype and its test pilot.

One of the forgotten historical flying cars. A prototype of the Ave Mizar.

Since the 1920s, it seems like innovators and inventors came up with flying cars almost once a decade. You can see pictures of some of these cars in Popular Mechanics’ gallery. Some crashed and burned, in the tradition set by Ford. Others managed to soar sky high. None actually made it to mass production, for two main reasons:

  • Extremely wasteful: flying cars are extremely wasteful in terms of fuel consumption. Their energy efficiency is abysmal when compared to that of high-altitude and high-speed airplanes.
  • Extremely unsafe: let’s be honest for a moment, OK? You give people cars that can drive on what is essentially a one-dimensional road, and what do they do? They cause traffic accidents. What do you think would happen if you gave everyone the ability to drive a car in three dimensions? Crash, crash and burn all over again. For flying cars to become widely used in society, everyone would need to take flying lessons. Good luck with that.

These two limitations together made sure that flying cars for the masses remained a fantasy – and they still largely are. In fact, I would go as far as saying that any new concept or prototype of a flying car that does not take these challenges into account is presented to the public as a ‘flying car’ only as a publicity stunt.

But now, things are beginning to change, because of three trends that together will provide answers to the main barriers standing in the way of flying cars.

 

The Three Trends that will Enable Flying Cars

There are three trends that, combined, will enable the use of flying cars by the public within twenty years.

First Trend: Massive Improvement in Aerial Drone Capabilities

If you visit your city’s playgrounds, you may find children there having fun flying drones around. The drones they’re using – which often cost less than $200 – would’ve been considered highly sophisticated weapons of war just twenty years ago, and would’ve been sold by arms manufacturers at prices in the order of millions of dollars.

14-year-old Morgan Tien with his drone. Source: Bend Bulletin.

Dr. Peter Diamandis, innovator, billionaire and futurist, wrote in 2014 about the massive improvement in the capabilities of aerial drones. Briefly, current-day drones are a product of exponential improvement in computing elements (inertial measurement units), communications (GPS receivers and systems), and even sensors (digital cameras). All of the above – at their current sizes and prices – would not have been available even ten years ago.

Aerial drones are important for many reasons, not least because they may yet serve as the basis for a flying car. Innovators, makers and even firms today are beginning to strap together several drones, turning them into flying platforms that can carry individuals around.

The most striking example of this kind comes from a Canadian inventor who recently flew 275 meters on a drone platform he basically fashioned in his garage.

Another, more cumbersome version of the Human-Transportation Drone (let’s call them HTDs from now on, shall we?) was demonstrated this week at the Las Vegas Convention Center. It is essentially a tiny helicopter with four double-propellers attached, much like a large drone. It has room for just one traveler, and can fly for up to 23 minutes according to the manufacturers. Most importantly, the Ehang 184, as it’s called, is supposed to be autonomous, which brings us straight to the next trend: the rise of machine intelligence.

Ehang 184. Credit: Ehang. Originally found on Gizmag.

Second Trend: Machine Intelligence and Flying Cars

There can be little question that drones will keep improving in their capabilities. We will improve our understanding of the science and technology behind aerial drones, and develop more efficient tools for aerial travel, including some that will carry people around. But will these tools be available for mass use?

This is where the safety barrier comes into the picture. You can’t let the ordinary Joe Shmoe control a vehicle like the Ehang 184, or even a light-weight drone platform. Not without teaching them how to fly the thing, which would demand long practice and lots of money, and would sharply limit the number of potential users.

This is where machine intelligence comes into the picture.

Autonomous control is virtually a must for publicly usable HTDs. Luckily, machine intelligence is making leaps and bounds forward, with autonomous (driverless) cars travelling the roads even today. If such autonomous systems can function for cars on the roads, why not do the same for drones in the air?

As things currently stand, all aerial drones will have to be controlled at least partly autonomously, in order to prevent collisions with other drones. NASA is planning a “Traffic Management Convention” for drones, which could include tens of thousands of drones – and many more than that, if the need arises. The next logical step, therefore, is to include future HTDs in this future system, thus taking control out of the pilot’s hands and transferring it completely to the vehicle and the system controlling it.

If the said system for managing aerial traffic becomes a reality, and assuming that drones capabilities are advanced enough to provide human transportation services, then autonomous HTDs for mass use will not be far behind.

The first two trends address the second barrier – inherent unsafety. The third trend, which I will present now, deals with the first barrier: the inefficient and wasteful use of energy.

Third Trend: Solar Energy

All small drones rely on electricity to function. Even a larger drone like the Ehang 184, which could be used for human transport, is powered by electricity, and can fly for 23 minutes before requiring a recharge. While 23 minutes may not sound like a lot of time, it’s more than enough for people to ‘hop’ from one side of most cities to the other, as long as there isn’t aerial congestion.

Of course, that’s the situation today. But batteries keep on improving. Elon Musk claims that by 2017, Tesla’s electric cars will have a 600-mile range on a single charge, for example. As batteries improve further, HTDs will be able to stay in the air for even longer periods of time, despite being powered by electricity alone. This adherence to electricity is important, since twenty years from now it is highly likely that we’ll have much cheaper electric energy coming directly from the sun.

Support for this argument comes from the exponential decline in the costs associated with producing and utilizing solar energy. Forty years ago, it would’ve cost about $75 to produce one watt of solar energy. Today the cost is less than a single dollar per watt. And as prices go down, the number of solar panel installations soars sky-high, roughly doubling every two years. Worldwide solar capacity in 2014 was 53 times higher than in 2005.

Credit: Earth Policy Institute / Bloomberg. Originally found on Treehugger.
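
Those numbers are worth a quick back-of-the-envelope check. Here’s a minimal sketch in Python, using only the round figures quoted above ($75 per watt falling to about $1 per watt over forty years), of the average annual price decline they imply:

```python
# Back-of-the-envelope: the average annual decline in solar cost per watt
# implied by the figures above (~$75/W forty years ago, ~$1/W today).
start_cost, end_cost, years = 75.0, 1.0, 40

annual_factor = (end_cost / start_cost) ** (1.0 / years)
print(f"average annual cost factor: {annual_factor:.3f}")        # ~0.898
print(f"i.e. roughly a {1 - annual_factor:.0%} price drop per year")
```

A steady drop of roughly 10% a year sounds unimpressive in any single year – which is exactly why exponential trends like this one tend to be underestimated.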

If the rising trend of solar energy does not grind to a halt sometime in the next decade, then we will obtain much of our electric energy from the sun. We won’t have usable passenger solar airplanes – these need high-energy jet fuel to operate – but we will have solar panels pretty much everywhere: covering the sides and top of every building, and quite possibly every car as well. Buildings would both consume and produce energy. Much of the unneeded energy would be saved in batteries, or almost instantaneously diverted via the smart grid to other spots in the city where it’ll be needed.

If that is the face of the future – and the trends support this view – then HTDs could be an optimal way of transportation in the city of the future. Aerial drones could be deployed on tops of houses and skyscrapers, where they will be constantly charged by solar panels until they need to take a passenger to another house. Such a leap would only take 10-15 minutes, followed by a recharging period of 30 minutes or so. The entire system would operate autonomously – without human control or interference – and be powered by the sun.

 

Conclusions and Forecast for the Future

When can we expect this system to be deployed? Obviously it’s difficult to be certain about the future, particularly in cases where technological trends meet with societal, legal and political barriers to entry. Current culture will find it difficult to accept autonomous vehicles, and Big Fossil Fuel firms are still trying to pretend solar energy isn’t here to stay.

All the same, it seems that HTDs are already rearing their heads, with several inventors working separately to produce them. Their attempts are still extremely hesitant, but every attempt demonstrates the potential in HTDs and their viability for human transportation. I would therefore expect that in the next five years we will see demonstrations of HTDs (not for public use yet) that can carry individuals to a distance of at least one mile, and can be fully charged within one hour by solar panels alone. That is the easy forecast to make.

The more difficult forecast involves the use of autonomous aerial drones, the assimilation of HTDs into an overarching system that controls all the drones in a shared aerial space, and the mass-deployment of HTDs in a city. Each of these achievements needs to be made separately in order to fulfill the larger vision of a flying car for the masses. I am going to take a wild guess here, and suggest that if no Hindenburg-like disaster happens, then we’ll see real flying cars in our cities twenty years from now – by the year 2035. It is likely that these HTDs will only be able to carry a single individual, and will probably be used more as a ‘flying taxi’ service carrying individual businesspeople between buildings than as a full-blown family flying car.

And then, finally, when people ask me where their flying car is, I will be able to provide a simple answer: “It’s parked on the roof.”

Four Robot Myths it’s Time We Let Go of

A week ago I lectured in front of an exceedingly intelligent group of young people in Israel – “The President’s Scientists and Inventors of the Future”, as they’re called. I decided to talk about the future of robotics and its uses in society, and as an introduction to the lecture I tried to dispel a few myths about robots that I’ve heard repeatedly from older audiences. Perhaps not so surprisingly, the kids were just as disenchanted with these myths as I was. All the same, I’m writing the four robot myths here, for all the ‘old’ people (20+ years old) who are not as well acquainted with technology as our kids.

As a side note: I lectured in front of the Israeli teenagers about the future of robotics, even though I’m currently residing in the United States. That’s another thing robots are good for!

I’m lecturing as a telepresence robot to a group of bright youths in Israel, at the Technion.

 

First Myth: Robots must be shaped as Humanoids

Ever since Karel Capek’s first play about robots, the general notion among the public has been that robots must resemble humans in their appearance: two legs, two hands and a head with a brain. Fortunately, most sci-fi authors stop at that point and do not add genitalia as well. The idea that robots have to look just like us is, quite frankly, ridiculous, and stems from an over-appreciation of our own form.

Today, this myth is being dispelled rapidly. Autonomous vehicles – basically robots designed to travel on the roads – obviously look nothing like human beings. Even telepresence robot manufacturers have despaired of notions about robotic arms and legs, and are producing robots that often look more like a broomstick on wheels. Robotic legs are simply too difficult to operate, too costly in energy, and much too fragile with the materials we have today.

Telepresence robots – no longer shaped like human beings. No arms, no legs, definitely no genitalia. Source: Neurala.

 

Second Myth: Robots have a Computer for a Brain

This myth is interesting in that it’s both true and false. Obviously, robots today are operated by artificial intelligence run on a computer. However, the artificial intelligence itself is vastly different from the simple, rule-based engines we’ve had in the past. The state-of-the-art AI engines are based on artificial neural networks: basically a very simple simulation of a small part of a biological brain.

The big breakthrough with artificial neural networks came about when Andrew Ng and other researchers in the field showed they could use cheap graphical processing units (GPUs) to run sophisticated simulations of artificial neural networks. Suddenly, artificial neural networks appeared everywhere, for a fraction of their previous price. Today, all the major IT companies are using them, including Google, Facebook, Baidu and others.

Although artificial neural networks were reserved for IT in recent years, they are beginning to direct robot activity as well. By employing artificial neural networks, robots can start making sense of their surroundings, and can even be trained for new tasks by watching human beings do them instead of being programmed manually. In effect, robots employing this new technology can be thought of as having (exceedingly) rudimentary biological brains, and in the next decade can be expected to reach an intelligence level similar to that of a dog or a chimpanzee. We will be able to train them for new tasks simply by instructing them verbally, or even showing them what we mean.
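
For readers who want to see what ‘a very simple simulation of a small part of a biological brain’ actually means in practice, here is a toy sketch in Python – obviously not the sophisticated engines those companies run on GPU clusters, just a tiny two-layer network of simulated neurons learning the XOR function by backpropagation:

```python
import numpy as np

# A toy artificial neural network: one hidden layer of simulated 'neurons'
# learning the XOR function by backpropagation (gradient descent).
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)     # hidden-layer activations
    out = sigmoid(h @ W2 + b2)   # network output
    # Backpropagation: nudge every weight to reduce the output error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # converges toward [[0], [1], [1], [0]]
```

Scale this same mechanism up to millions of simulated neurons, feed it camera and sensor data instead of four toy examples, and you get the kind of networks that let robots start making sense of their surroundings.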

 

This video clip shows how an artificial neural network AI can ‘solve’ new situations and learn from games, until it gets to a point where it’s better than any human player.

 

Admittedly, the companies using artificial neural networks today are operating large clusters of GPUs that take up plenty of space and energy to operate. Such clusters cannot be easily placed in a robot’s ‘head’, or wherever its brain is supposed to be. However, this problem is easily solved when the third myth is dispelled.

 

Third Myth: Robots as Individual Units

This is yet another myth that we see very often in sci-fi. The Terminator, Asimov’s robots, R2D2 – those are all autonomous and individual units, operating by themselves without any connection to The Cloud. Which is hardly surprising, considering there was no information Cloud – or even widely available internet – back in the day when those tales and scripts were written.

Robots in the near future will function much more like a team of ants than as individual units. Any piece of information that one robot acquires and deems important will be uploaded to the main servers, analyzed and shared with the other robots as needed. Robots will, in effect, learn from each other in a process that will increase their intelligence, experience and knowledge exponentially over time. Indeed, shared learning will result in an acceleration of the rate of AI development, since the more robots we have in society – the smarter they will become. And the smarter they become – the more we will want to assimilate them into our daily lives.

Tesla’s cars are a good example of this sort of mutual learning and knowledge sharing. In the words of Elon Musk, Tesla’s CEO –

“The whole Tesla fleet operates as a network. When one car learns something, they all learn it.”

Elon Musk and the Tesla Model X: the cars that learn from each other. Source: AP and Business Insider.

Fourth Myth: Robots can’t make Moral Decisions

In my experience, many people still adhere to this myth, under the belief that robots do not have consciousness, and thus cannot make moral decisions. This is a false inference: I can easily program an autonomous vehicle to stop before hitting human beings on the road, even without the vehicle enjoying any kind of consciousness. Moral behavior, in this case, is the product of programming.

Things get complicated when we realize that autonomous vehicles, in particular, will have to make novel moral decisions that no human being was ever required to make in the past. What should an autonomous vehicle do, for example, when it loses control over its brakes, and finds itself rushing toward a collision with a man crossing the road? Obviously, it should veer to the side of the road and hit the wall. But what should it do if it calculates that its ‘driver’ will be killed as a result of the collision with the wall? Who is more important in this case? And what happens if two people cross the road instead of one? What if one of those people is a pregnant woman?

These questions demonstrate that it is hardly enough to program an autonomous vehicle for specific encounters. Rather, we need to program into it (or train it to obey) a set of moral rules – heuristics – according to which the robot will interpret any new occurrence and reach a decision accordingly.
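
To make the distinction concrete, here is a deliberately naive sketch in Python of what such a heuristic might look like in code. Every name, option and weight in it is invented for illustration – no manufacturer has published its actual decision logic:

```python
from dataclasses import dataclass

# A deliberately naive moral heuristic for a vehicle whose brakes have
# failed. All names, options and weights here are invented illustrations.
@dataclass
class Outcome:
    maneuver: str
    pedestrians_harmed: int
    occupants_harmed: int

def harm_score(o: Outcome) -> int:
    # One possible rule: weigh all lives equally and minimize total harm.
    # Choosing a different weighting here is choosing a different morality.
    return o.pedestrians_harmed + o.occupants_harmed

def choose_maneuver(options: list[Outcome]) -> Outcome:
    # Tie-breaker: prefer braking in place, for the sake of predictability.
    return min(options, key=lambda o: (harm_score(o), o.maneuver != "brake"))

options = [
    Outcome("brake", pedestrians_harmed=1, occupants_harmed=0),
    Outcome("swerve_to_wall", pedestrians_harmed=0, occupants_harmed=1),
]
print(choose_maneuver(options).maneuver)  # 'brake' - the tie-breaker decides
```

The hard part, as the sketch makes clear, is not the code: it’s choosing the weights and tie-breakers in advance, and that choice is itself a moral decision made by whoever writes them.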

And so, robots must make moral decisions.

 

Conclusion

As I wrote in the beginning of this post, the youth and the ‘techies’ are already aware of how out-of-date these myths are. Nobody as yet, though, knows where the new capabilities of robots will take us when they are combined together. What will our society look like, when robots are everywhere, sharing their intelligence, learning from everything they see and hear, and making moral decisions not from an individual unit perception (as we human beings do), but from an overarching perception spanning insights and data from millions of units at the same time?

This is where we are heading – a super-intelligence composed of incredibly sophisticated AI, with robots as its eyes, ears and fingertips. It’s a frightening future, to be sure. How could we possibly control such a super-intelligence?

That’s a topic for a future post. In the meantime, let me know if there are any other myths about robots you think it’s time to ditch!

 

Forecast for 2016: The Year of the Data Race; or – How Our Politicians Will Mess with Our Minds in 2016

Almost four years ago, the presidential elections took place in the United States. Barack Obama competed against Mitt Romney in the race for the White House. Both candidates delivered inspiring speeches, appeared at every institution that would host them, and employed hundreds of paid consultants and volunteers who advertised them throughout the nation. In the end, Obama won the race for the presidency, possibly because of his opinions and ideas… or because of his reliance on data scientists. In fact, as Sasha Issenberg’s article about the 2012 elections in MIT Technology Review describes –

“Romney’s data science team was less than one-tenth the size of Obama’s analytics department.”

How did Obama utilize all of those data scientists?

 


Analyzing the Individual Voter

Up to 2012, individual voters were analyzed according to a relatively simplistic system which only took into account very limited parameters, such as age, place of residence, etc. The messages those potential voters received on their phones and in their physical mailboxes and virtual inboxes were customized according to these parameters. Obama’s team of data scientists expanded the list into dozens of different parameters and criteria. They then utilized a system in which customized messages were mailed to certain representative voters, who were later surveyed so that the scientists could figure out how their opinions changed according to the structure of the messages sent.

This level of analysis and understanding of the individual voters and the messages that helped them change their opinions aided Obama in delivering the right messages, at the right time, to the persuadable people. If the term “persuadable” strikes you as sinister, as if Obama’s team were preying on the weak of mind or those sitting on the fence, you should be aware that it was used by Terry Walsh, who coordinated Obama’s campaign’s polling and paid-media spending.

Of course, being a “persuadable” voter does not mean that you’re a helpless dummy. Rather, it just means that you’re still uncertain which way to turn. But when political parties can find those undecided voters, focus on them and analyze each one with the most sophisticated computer models available to find out all about their levers and buttons, how much free choice does that leave those people?

I could go on describing other strategies utilized by Obama’s team in the 2012 elections. They identified voters who were likely to ‘switch sides’ following just one phone call, and had about 500,000 conversations with those voters. They supplied a data collection firm with the addresses of many “easily persuadable” voters, and received in return the records of TV watching in those households. That way, the campaign team could maximize the efficiency of TV advertisements – fitting them to the right times, the right channels, and the right destinations. All of the above is well recorded, and described in Issenberg’s article and other resources (like this, that, and others).
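
To give a feeling for the mechanics, here is a heavily simplified sketch in Python of persuadability scoring. Every feature, label and number in it is invented for illustration; by Issenberg’s account, the real models used dozens of parameters and vastly larger samples:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented voter features: [age, years_at_address, voted_last_time, tv_hours]
voters = np.array([
    [23,  1, 0, 4.0],
    [47, 12, 1, 2.5],
    [35,  3, 0, 5.0],
    [68, 30, 1, 1.0],
])
# Invented labels from follow-up surveys: did the test message shift views?
shifted = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(voters, shifted)

# Score the whole voter file, then contact the most persuadable voters first
scores = model.predict_proba(voters)[:, 1]
print(np.argsort(scores)[::-1])  # voter indices, most persuadable first
```

Swap in dozens of parameters and millions of voter records, and you have something like the core of the analytics operation described above.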

 

The Republican Drowning Whale

Obama wasn’t the only one to utilize big data and predictive analytics in the 2012 campaign. His opponent, Mitt Romney, had a team of data scientists of his own. Unfortunately for Romney, his team didn’t even come close to the level of operations of Obama’s team. Romney’s team invested much of its effort in an app named Orca, which was supposed to indicate which of the expected Republican voters actually turned up to vote – and to send messages to the Republican slackers and encourage them to haul their tucheses to the voting booths. In practice, the app was horribly conceived, and crashed numerous times during Election Day, leading to utter confusion about the goings-on.

 

Mitt Romney being packed up after the massive failure of the Orca system in the 2012 presidential elections. Image originally from Phil Ebersole’s blog.

Regardless of the success of the Democrats’ data systems vs. the Republicans’, one thing is clear: both parties are going to use big data and predictive analytics in the upcoming 2016 elections. In fact, we are going into a very interesting stage in the history of the 21st century: the Data Race.

 

From Space to Data

The period in time known as the Space Race took place in the 1960s, when the United States competed against the Soviet Union in a race to space. As a result of the Space Race, space launch technologies developed and made progress in leaps and bounds, with both countries fighting to demonstrate their superior science and technology. Great need – and great budgets – produce great results quickly.

In 2016, we will see a new kind of race starting – the Data Race. In 2012 it wasn’t really a race. The Democrats basically stepped on the Republicans. In 2016, however, the real Data Race in politics will be on: the Democrats will gather their teams of data scientists once more, and build on the piles of data that were gathered in the 2012 elections and since then. The Republicans – possibly Trump, with his self-funded election campaign – will learn from their mistakes in 2012, hire the best data scientists they can find, and utilize methodologies similar to or better than those developed by the Democrats.

In short, both parties will find themselves in the midst of a Data Race, striving to obtain as much data as they can about the American citizen: about our lifestyles, habits, choices and any other tidbit of information that can be used to understand the individual voter – and how best to approach him or her and convert them to the party’s point of view. The data gathering and analysis systems will cost a lot, obviously, but since recent rulings in America allow larger contributions to be made to political candidates, money should not be a problem.

 

Conclusion: Where are We Heading?

It’s quite obvious that both American parties in 2016 are going to compete in a Data Race. The bigger question is whether we should even allow them to do it so freely. Democracy, after all, is based on the assumption that every person can make up his or her own mind. Do we really honor that core assumption when political candidates can analyze human beings with the power of supercomputers, big data and predictive analytics? Can an individual citizen truly choose freely, when powers on both sides are pulling and pushing at that individual’s levers and buttons, with methods tested and proven on millions of similarly-minded individuals?

Using predictive analytics in politics holds an inherent threat to democracy: by understanding each individual, we can also devise approaches and methodologies to influence every individual with maximal efficiency. This approach has the potential to turn most individuals into mere puppets in the hands of the powerful and the affluent.

Does that mean we should refrain from using big data and predictive analytics in politics? Of course not – but we can regulate their use, so that instead of campaign managers focusing their efforts on the “easily persuadable”, they use the data gleaned from the public to understand people’s real concerns and work to address them. We should all hope our politicians are heading in that direction, and if they aren’t – we should give them a shove towards it.

 

 

Are we Entering the Aerial Age – or the Age of Freedom?

A week ago I covered in this blog the possibility of using aerial drones for terrorist attacks. The following post dealt with the Failure of Myth and covered Causal Layered Analysis (CLA) – a futures studies methodology meant to counter that failure and allow us to consider alternative futures radically different from the ones we tend to imagine intuitively.

In this blog post I’ll combine insights from both of those recent posts, and suggest ways to deal with the terrorism threat posed by aerial drones at each of the four layers suggested by CLA: the Litany, the Systemic view, the Worldview, and the Myth.

To understand why we have to use such a wide-angle lens for the issue, I would compare the proliferation of aerial drones to another transition in history: the one between the Bronze Age and the Iron Age.

 

From Bronze to Iron

Sometime around 1300 BC, iron smelting was discovered by our ancient forefathers, presumably in the Anatolia region. The discovery rapidly diffused to many other regions and civilizations, and changed the world forever.

If you ask people why iron weapons are better than bronze ones, they’re likely to answer that iron is simply stronger, lighter and more durable than bronze. The truth, however, is that iron weapons are not much more effective than bronze ones. The real importance of iron smelting, according to “A Short History of War” by Richard A. Gabriel and Karen S. Metz, is this:

“Iron’s importance rested in the fact that unlike bronze, which required the use of relatively rare tin to manufacture, iron was commonly and widely available almost everywhere… No longer was it only the major powers that could afford enough weapons to equip a large military force. Now almost any state could do it. The result was a dramatic increase in the frequency of war.”

It is easy to imagine political and national leaders using only the first and second layers of CLA – the Litany and the Systemic view – at the transition from the Bronze Age to the Iron Age. “We should bring these new iron weapons to all our soldiers”, they probably told themselves, “and equip the soldiers with stronger shields that can deflect iron weapons”. Even as they enacted these changes in their armies, the worldview itself shifted, and warfare was vastly transformed by the sheer number of civilians who could suddenly wield an iron weapon. Generals who thought that preparing for the change merely meant equipping their soldiers with iron weapons found themselves on the battlefield facing armies much larger than their own, built on new conscription models that their opponents had developed.

Such changes in warfare and in the existing worldview could have been realized in advance by utilizing the third and fourth layers of CLA – the Worldview and the Myth.

Aerial drones are similar to Iron Age weapons in that they are proliferating rapidly. They can be built or purchased at ridiculously low prices, by practically anyone. In the past, only the largest and most technologically sophisticated governments could afford to employ aerial drones. Nowadays, every child has access to them. In other words, the world is overturning everything we thought we knew about the possession and use of unmanned aerial vehicles. Such a dramatic change – one that our descendants may yet call The Aerial Age when they look back in history – forces us to rethink everything we knew about the world. We must, in short, analyze the issue from a wide-angle view, with an emphasis on the third and fourth layers of CLA.

How, then, do we deal with the threat aerial drones pose to national security?

 

First Layer: the Litany

The intuitive way to deal with the threat posed by aerial drones is simply to reinforce the measures we’ve had in place before. Under the thinking constraints of the first layer, we should basically strive to strengthen police forces and provide larger budgets for anti-terrorist operations. In short, we should do just as we did in the past, but more and better.

It’s easy to see why public systems love the litany layer: these measures build reputation and generate a general feeling that “we’re doing something to deal with the problem”. What’s more, they require extra budget (to be obtained from Congress) and make the organization larger along the way. What’s not to like?

Second Layer: the Systemic View

Under the systemic view we can think about the police forces, and the tools they have to deal with the new problem. It immediately becomes obvious that such tools are sorely lacking. Therefore, we need to improve the system and support the development of new techniques and methodologies to deal with the new threat. We might support the development of anti-drone weapons, for example, or open an entirely new police department dedicated to dealing with drones. Police officers will be trained to deal with aerial drones, so that nothing is left to chance. The judicial and regulatory systems lend themselves to the struggle at this layer as well, by issuing highly regulated licenses to operate aerial drones.

 

ray-gun.jpg
An anti-drone gun. Originally from Battelle Innovations, and downloaded from TechTimes.

 

Again, we could stop the discussion here and still have a highly popular set of solutions. As we delve deeper into the Worldview layer, however, the opposition starts building up.

Third Layer: the Worldview

When we consider the situation at the worldview layer, we see that the proliferation of aerial drones is simply a by-product of several technological trends: the miniaturization of electronics, artificial intelligence sophisticated enough (at least by the standards of 20-30 years ago) to control the rotor blades, and even personalized manufacturing with 3D printers, so that anyone can construct his or her own personal drone in the garage. All of the above lead to the Aerial Age – in which individuals can explore the sky as they like.

 

2EDCBCF500000578-3336860-image-a-88_1448662571275.jpg
Exploration of the sky is now in the hands of individuals. Image originally from DailyMail India.

 

Looking at the world from this point of view, we immediately see that the vast expected proliferation of aerial drones in the coming decade will force us to reconsider our previous worldviews. Should we really focus on local or systemic solutions, rather than preparing ourselves for this new Aerial Age?

We can look even further than that, of course. In a very real way, aerial drones are but a symptom of a more general change in the world. The Aerial Age is but one aspect of the Age of Freedom, or the Age of the Individual. Consider that the power of design and manufacturing is being taken from nations and granted to individuals via 3D printers, powerful personal computers, and the internet. As a result of these inventions and others, individuals today hold power that once belonged only to the greatest nations on Earth. The established worldview, in which nations are the sole holders of power, is changing.

When one looks at the issue like this, it is clear that such a dramatic change can only be countered or mitigated by dramatic measures. Nations that want to retain their power and prevent terrorist attacks will be forced to break rules that were created long ago, back in the Age of Nations. It is entirely possible that governments and rulers will have to sacrifice their citizens’ privacy and turn to monitoring their citizens constantly, much as the NSA did – and is still doing to some degree. When an individual dissident has the potential to bring harm to thousands and even millions (via synthetic biology, for example), nations can ill afford to take any chances.

What are the myths that such endeavors will disrupt, and what new myths will they be built upon?

Fourth Layer: the Myth

I’ve already identified a few myths that will be disrupted by the new worldview. First and foremost, we will let go of the idea that only a select few can explore the sky. The new myth is that of Shared Sky.

The second myth to be disrupted is that nations hold all the technological power, while terrorists and dissidents are reduced to using crude bombs at best, or pitchforks at worst. This myth is no longer true, and it will be replaced by a myth of Proliferation of Technology.

The third myth to be dismissed is that governments can protect their citizens efficiently with the tools they have in the present. When we have such widespread threats in the Age of Freedom, governments will experience a crisis in governance – unless they turn to monitoring their citizens so closely that any pretense of privacy is lost. And so, it is entirely possible that in many countries we will see the emergence of a new myth: Safety in Exchange for Privacy.

 

Conclusion

Over the last week I’ve analyzed the issue of aerial drones being used for terrorist attacks, utilizing the Causal Layered Analysis methodology. When I look at the results, it’s easy to see why many decision makers are reluctant to solve problems at the third and fourth layers – Worldview and Myth. The solutions found in the lower layers – the Litany and the Systemic view – are so much easier to understand and to explain to the public. Regardless, if you want to actually understand the possibilities the future holds in any subject, you must look past the first two layers in the long term, and focus instead on the big picture.

And with that said – happy new year to one and all!

The Failure of Myth and the Future of Medical Mistakes

 

Please note: this is another chapter in a series of blog posts about Failures in Foresight. You may want to also read the other blog posts dealing with the Failure of Nerve, the Failure of the Paradigm, and the Failure of Segregation.

 

At the 1900 World Exhibition in Paris, French artists made an attempt to forecast the shape of the world in 2000. They produced a few dozen vivid and imaginative drawings (clearly they did not succumb to the Failure of the Paradigm!).

Here are a few samples from the World Exhibition. Can you tell what they all have in common?

military-cycles-what-1900-french-artists-thought-the-year-2000-would-look-like.jpg
Police motorcycles in the year 2000
skype-what-1900-french-artists-thought-the-year-2000-would-look-like.jpg
Skype in the year 2000
phonographs-what-1900-french-artists-thought-the-year-2000-would-look-like.jpg
Phonecalls and radio in the year 2000
birding-what-1900-french-artists-thought-the-year-200-would-be-like.jpg
Fishing for birds in the year 2000

 

Psychologist Daniel Gilbert wrote about similar depictions of the future in his book “Stumbling on Happiness”:

“If you leaf through a few of them, you quickly notice that each of these books says more about the times in which it was written than about the times it was meant to foretell.”

You only need to take another look at the images to convince yourself of the truth of Gilbert’s statement. The women and men are dressed the same way they were dressed in 1900, except when they go ‘bird hunting’ – in which case the gentlemen wear practical swimming suits, whereas the ladies still stick with their cumbersome dresses underwater. Policemen still employ swords and brass helmets, and of course there are no policewomen. Last but not least, it seems that the future is reserved entirely for Caucasians, since nowhere in these drawings can you see a person of African or Asian descent.

 

The Failure of Myth

While some of the technologies depicted in these ancient paintings actually became reality (Skype is a nice example), it’s clear the artists completely failed to capture a larger change. You may call it a change in the zeitgeist, the spirit of the generation, or in the myths that surround our existence and lives. I’ll be calling this the Failure of Myth, and I hope you’ll agree that it’s impossible to consider the future without also taking into account these changes in our mythologies and underlying social and cultural assumptions: men and women can be equal, people of color have the same rights as white people, and LGBT people have just the same right to exist as heterosexuals. None of these assumptions would’ve been obvious, or included in the myths and stories upon which society is based, a mere fifty years ago. Today they’re taken for granted.

 

1013px-USMC-09611.jpg
The myth according to which black people have very few real rights was overturned in the 1960s. Few forecasters thought of such an occurrence in advance.

 

Could we ever have forecast these changes?

Much as with the Failure of the Paradigm, I would posit that we can never accurately forecast the ways in which myths and culture will change. We can hazard some guesses, but that’s just what they are: guesswork that relies more on our present myths than on any solid understanding of the future.

That said, there are certain methodologies used by foresight researchers that could help us at least chart different solutions to problems in the present, in ways that force us to consider our current myths and worldviews – and challenge them when needed. These methodologies allow us to create alternative futures that could be vastly different from the present in the ways that really matter: how people think of themselves, of each other, and of the world around them.

One of the best known methodologies used for this purpose is called Causal Layered Analysis (CLA). It was invented by futures studies expert Sohail Inayatullah, who also describes case studies for using it in his recent book “What Works: Case Studies in the Practice of Foresight”.

In the rest of this blog post, I’ll sum up the practical principles of CLA, and show how they could be used to analyze different issues dealing with the future. Following that, in the next blog post, we’ll take a look again at the issue of aerial drones used for terrorist attacks, and use CLA to consider ways to deal with the threat.

 

Mines_1.jpg
Another Failure of Myth: the ancient Greeks could not imagine a future without slavery. None of their great philosophers could escape the myth of slavery. Image originally from Wikipedia

 

 

CLA – Causal Layered Analysis

At the core of CLA is the idea that every problem can be looked at in four successive layers, each deeper than the previous one. Let’s look at each layer in turn, and see how it adds depth to a discussion of a certain problem: the “high rate of medical mistakes leading to serious injury or death”, as Inayatullah describes it in his book. My brief analysis of this problem at every level is almost entirely based on his examples and thoughts.

First Layer: the Litany

The litany is the day-to-day talk. When you’re arguing at dinner parties about the present and the future, you’re almost certainly using the first layer. You’re basically repeating whatever you’ve heard from the media, from the politicians, from thought leaders and from your family. You may make use of data and statistics, but these are only interpreted according to the prevalent and common worldview that most people share.

When we rely on the first layer to consider the issue of medical mistakes, we look at the problem in a largely superficial manner. We can sum up the approach in one sentence: “Physicians make mistakes? Teach them better, and if they still don’t improve, throw them in jail!” In effect, we’re focusing on the people who are making the mistakes – the ones who are so easy to blame. The solutions in this layer are usually short-term solutions, and can be summed up in short sentences that appeal to audiences who share the same worldview.

Second Layer: the Systemic View

Using the systemic view of the second layer, we try to delve deeper into the issue. We don’t blame people anymore (although that does not mean we lift the responsibility for their mistakes from their shoulders); instead, we try to understand how the system itself contributes to the actions of the individual. To do that, we analyze the social, economic and political forces that mold the system into its current shape.

In the case of medical mistakes, the second layer encourages us to start asking tougher questions about the systems under which physicians operate. Could it be, for example, that physicians rush their treatments because they are only allowed to spend 5-10 minutes with each patient, as is the custom in many public medical services? Or perhaps the layout of the hospital does not allow physicians to consult easily with one another, and thus to reach more solid conclusions via teamwork?

The questions asked in the second layer mode of thinking allow us to improve the system itself and make it more efficient. We do not take the responsibility off the shoulders of the individuals, but we do accept that better systems allow and encourage individuals to reach their maximum efficiency.

Third Layer: Worldview

This is the layer where things get hairy for most people. In this layer we try to identify and question the prevalent worldview and how it contributes to the issue. Worldviews are the “cognitive lenses” through which we view and interpret the world.

As we try to analyze the issue of medical mistakes in the third layer, we begin to identify the worldviews behind medicine. We see that in modern medicine, the doctor stands “high above” in the hierarchy of knowledge – certainly much higher than patients. This hierarchy of knowledge and prestige defines the relationship between the physician and the patient. Once we understand this worldview, solutions that would’ve fit in the second layer – like increasing the time physicians spend with patients – seem more like a small bandage on a gut wound than an effective way to deal with the issue.

Another worldview that can be identified and challenged in this layer is the idea that patients actually need to go to clinics or hospitals for check-ups. In an era of tele-presence and electronics, why not make use of wearable computing or digital doctors to take care of many patients? As we question this worldview and propose alternatives, we find that systemic solutions like “changing the shape of the hospitals” become unnecessary once more.

Fourth Layer: the Myth

The last layer, the myth, deals with the stories we tell ourselves and our children about the world and the way things work. Mythologies are defined by Wikipedia as –

“a collection of myths… [and] stories … [that] explain nature, history, and customs.”

Make no mistake: our children’s books are all myths that serve to teach children how they should behave in society. When my son reads about Curious George, he learns that unrestrained curiosity can lead you into danger, but also to unexpected rewards. When he reads about Hansel and Gretel, he learns of the dangers of trusting strangers and stepmoms. Even fantasy books teach us myths about the value of wisdom, physical prowess and even beauty, as the tall, handsome prince saves the day. Myths are perpetuated everywhere in culture, and are constantly strengthened in our minds through the media.

What can we say about medical mistakes at the Myth level? Inayatullah believes that the deepest problem, immortalized in myth throughout the last two millennia, is that “the doctor knows best”. Patients are taught from a very young age that the physician’s verdict is more important than their own thoughts and feelings, and that they should not argue against it.

While I see the point in Inayatullah’s view, I’m not as certain that it is the reason behind medical mistakes. Instead, I would add a partner-myth: “the human doctor knows best”. This myth is instilled in medical doctors in many institutions, and makes it more difficult for them to rely on computerized analysis, or even to consider that as human beings they’re biased by nature.

 

Consolidating the Layers

As you may have realized by now, CLA is not used to forecast one accurate future, but is instead meant to deepen our thinking about potential futures. Any discussion about long-term issues should open with an analysis of those issues in each of the four layers, so that the solutions we propose – i.e. the alternative futures – can deal not only with the superficial aspects of the issue, but also with the deeper causes and roots.
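For readers who prefer to see the method laid out explicitly, here is a minimal sketch of such a four-layer analysis expressed as a simple data structure, using the medical-mistakes example from this post. The one-line layer summaries are my own shorthand for illustration, not part of Inayatullah’s formal methodology.

```python
# A bare-bones CLA worksheet for the medical-mistakes example.
# The layer summaries are my own shorthand, for illustration only.
cla_worksheet = {
    "issue": "High rate of medical mistakes leading to serious injury or death",
    "litany": "Blame the physicians: train them better, punish them harder.",
    "systemic": "Examine visit lengths, hospital layout, and teamwork incentives.",
    "worldview": "Question the hierarchy placing the doctor 'high above' the patient.",
    "myth": "Challenge the story that 'the (human) doctor knows best'.",
}

for layer, analysis in cla_worksheet.items():
    print(f"{layer:>9}: {analysis}")
```

Filling in all four rows before debating solutions helps ensure that the alternative futures we propose address the deeper layers, and not just the litany.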

 

Conclusion

The Failure of Myth – i.e. our difficulty in realizing that the future will change not only technologically, but also in the myths and worldviews we hold – is impossible to counter completely. We can’t know which myths will be promoted by future generations, just as we can’t forecast scientific breakthroughs fifty years in advance.

At most, we can be aware of the existence of the Failure of Myth in every discussion we hold about the future. We must assume, time after time, that the myths of future generations will be different from ours. My grandchildren may look at their meat-eating grandfather in horror, or laugh behind his back at his pants and shirt – while they walk naked in the streets. They may believe that complicated decisions should be left solely to computers, or that physical work should never be performed by human beings. These are just some of the possible myths that future generations can develop for themselves.

In the next blog post, I’ll go over the issue of aerial drones being used for terrorist attacks, and analyze it using CLA to identify a few possible myths and worldviews that we may need to change in order to deal with this threat.

 
