I am a Futures Studies researcher with special expertise in foresight, wild cards (black swans), and the analysis of emerging and disruptive technologies. I have spoken to hundreds of conference audiences and appeared frequently on TV and radio to discuss the fascinating study of the future of technology and society with a wide range of listeners and viewers. Most recently, I have been lecturing for and consulting with entities worldwide, including large companies and firms, the Lahav Management School at Tel Aviv University and other educational institutions, and I have served as an invited keynote lecturer at innovation and global-thinking workshops internationally, including in Greece, Kazakhstan, and Belarus.
One of the episodes which the United States Postal Service (USPS) would rather sweep under the rug is its attempt to use rocket mail – i.e., to send mail on actual rockets. Oh, and you might want to know that the first rocket used that way by the American postal service was a nuclear missile.
The idea of rocket mail was originally developed in Germany in the 19th century, but it never really took off. As missile technology improved, however, the postal service took note of the idea and decided to give it a shot. And so, the first official American rocket mail was launched in 1959.
The postal service chose for that purpose a Regulus cruise missile armed with a nuclear warhead. The warhead was removed and replaced with mail containers designed to withstand the impact when the missile hit the ground. The missile was launched from Virginia and reached its destination in Florida in just 22 minutes. Since the two points are around 700 miles (~1,100 km) apart, the mail reached its destination at an average speed of just over 3,000 km/h. That’s pretty impressive for mail delivery.
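The implied delivery speed can be checked with a quick back-of-the-envelope calculation, using only the figures quoted above (700 miles, 22 minutes):

```python
# Back-of-the-envelope check of the rocket-mail delivery speed.
# Figures from the story: ~700 miles between launch and landing,
# 22 minutes of flight time.
distance_miles = 700
distance_km = distance_miles * 1.609  # miles to kilometers
flight_minutes = 22

# Average speed in km/h: distance divided by time in hours.
speed_kmh = distance_km / (flight_minutes / 60)
print(round(speed_kmh))  # prints 3072
```

So the mail averaged roughly 3,000 km/h – about two and a half times the speed of sound.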
The US Postmaster General got so excited that he publicly declared the event to be –
“of historic significance to the peoples of the entire world”
and that –
“Before man reaches the moon, mail will be delivered within hours from New York to California, to Britain, to India or Australia by guided missiles. We stand on the threshold of rocket mail.”
Except, as we know very well now, they really didn’t stand on the threshold of rocket mail. The costs of rocket mail were too high – particularly for the infrastructure involved, which included the launch systems and the missile itself which couldn’t be reused. What’s more, at the same time that rocket mail was first attempted, international air travel became dramatically cheaper, so that important packages could easily be delivered in just a single day over the ocean, with no need for rockets of any kind.
Rocket mail is a historic invention that futurists should keep in mind whenever they gush about new technologies taking over the world. In the end, it all comes down to cost: if a new technology is more expensive than what you currently use – or than other technologies being developed at the same time – it probably won’t be adopted after all.
But at least the idea of rocket mail finally found a use in Mission: Impossible II, where Ethan Hunt’s sunglasses are delivered to him via a rocket. The US Postmaster General can be proud indeed.
Many economists and philosophers are trying to figure out the future of work. What will people do once robots and autonomous systems can perform practically every task in the workplace better than human beings can?
Well, here’s a little-known secret: many of us are already unemployed – many, many more than government statistics indicate. We just haven’t realized it yet.
Why? Because plenty of people today are working in bullshit jobs, in the words of anthropologist David Graeber. Here’s what he has to say about bullshit jobs –
“…more and more employees find themselves… working 40 or even 50 hour weeks on paper, but effectively working 15 hours… since the rest of their time is spent organizing or attending motivational seminars, updating their facebook profiles or downloading TV box-sets.”
Think of your own job. If you work at a desk or at an office, some of your days probably look approximately like this:
You come into the office in the morning.
You chat with your co-workers for 15 minutes.
You open your computer and chat with your friends on Facebook for another hour.
You feel compelled to do some work. You open a document you began working on yesterday, work on it for ten minutes, then excuse yourself to check your e-mail, then your Facebook feed again, then read answers on Quora (try my content for more future-related answers!), then play just one game of Solitaire…
…and two hours later, you return to reality and realize you haven’t done any significant work today. You resolve to work harder, immediately after lunch.
Lunch takes an hour.
And then you’re drowsy for yet another hour. Luckily, that’s the time for the weekly departmental motivational seminar, during which you can safely sleep while nodding your head vigorously at the same time and grunting affirmatively.
Finally, you realize with a shudder that it’s almost the end of the workday. You feel guilty and ashamed, and so, in a concentrated effort of 1–2 hours, you actually SIT DOWN AND WORK.
And the amazing thing is that in those 1–2 hours of work, you actually complete an amount of work that would have required an entire office of secretaries a few decades ago. That’s because you’re using smart, automated tools like Microsoft Word, Excel and PowerPoint. These tools increase productivity, so that a single person who is proficient in them can do more in less time.
So why do so many of us still work for eight hours a day? Why do so many people work at jobs that they know are ineffective, and in which they waste their time?
Simply put, because human beings need the illusion of being useful, or at least of doing something with their lives. They need to preserve a veneer of action – even though much of that action throughout the workday is almost entirely fictional.
Now, obviously, many of us do not work at a bullshit job… yet. But bullshit jobs form when productivity increases dramatically, which basically describes any form of work in which automation is going to have an impact. And that means that many of our jobs will become much more… bullshitty… in the future.
So – what will happen when robots take over all of our jobs? My guess is that mankind will just inflate the old jobs, so that work that can be done in ten minutes will still engage workers for a full day. In short, we’ll all ‘work’ at bullshit jobs.
Here, I made a diagram of what it’ll look like. And you know where I did it? That’s right – at work, while answering questions on Quora.
Meet Omer, my five-year-old son (in the picture above). He will be remembered for as long as humanity exists.
That’s pretty neat, isn’t it?
Let me explain why. Think of the great inventors, leaders and scientists of ages past: people like Alexander the Great, Isaac Newton, Plato and others. Most of them did not have a personal biographer looking over their shoulders, to record their great deeds. Even for those who did hire such a personal biographer, we only know today what they wanted us to know.
Now consider Omer. He is growing up in a period in which he is being monitored continuously. All the pictures I have taken of him, almost from the very moment he was born, are stored on Google’s and Facebook’s servers, where they are maintained and looked after continuously, so that they will be preserved for a very long time indeed. Every purchase I made for him using my credit card has been recorded somewhere by a data merchant, and the information has been sold to other companies.
As my son grows up, his smartphone will record his activities and health, his electronic devices will keep a close watch over him, and aerial drones in the sky will be able to record his movements on the ground. All of this information will be gathered effortlessly, and will be easily analyzed by AI engines to construct a picture of my son’s life.
So – in the future, we will all be remembered and recognized. Maybe not for our great inventions or prowess in battle, but for our personal, small and intimate stories and achievements. My son will know me – his father – as a real human being, full of nuances and quirks. He will know what I did tonight before going to bed, which websites I visited (yes, even if I used incognito mode – the data is still being retained by my internet service provider and Google), and what made me the man I was. And his kids – my grandchildren – will know my son’s story even better than he will know mine. And so on and on, into future generations.
In a way, I will never die for my son, and neither will you. Our stories will remain here to teach our children the lessons we’ve learned over our lives.
A few years ago I lectured in a European workshop about global risks. Before me lectured one of the World Health Organization (WHO) chief officers, who presented a very interesting graph.
What he showed was basically that life expectancy is expected to keep on rising all over the world, so that by the year 2100 it’s going to reach 85–90 years in high-income countries.
Well, I was pretty astounded by that forecast, which seemed to me extremely pessimistic. I talked with him over lunch and asked whether the forecast included all of the technologies currently being developed in university labs. I asked how the forecasts would be affected by –
The development of nano-robots that could hold back cancer, coronary thrombosis (heart attack), strokes and other diseases from inside the body;
Sophisticated techniques for genetic engineering, that could produce vaccines against cancer and other diseases;
Tissue engineering techniques that could repair entire tissues – sometimes while they’re still in the body;
Artificial intelligence engines that would provide real-time medical monitoring and consultation far more accurate than that of today’s best medical doctors.
I’m paraphrasing his answer a little, since it all happened a few years ago, but the gist of what he said was –
“No, we can’t take all that into account. The model can’t acknowledge medical breakthroughs. We know that such breakthroughs will have a dramatic impact, but we just don’t know when they’ll emerge from the lab. But I can tell you that if even 15% of the research currently being done in biomedical labs succeeds, then the forecasts will change dramatically.”
So – there is simply no good forecast that will answer the basic question of how long we’re supposed to remain alive in this century. It is entirely conceivable – indeed, even likely, as that WHO official admitted – that sometime in the next few decades, a ‘perfect storm’ of medical breakthroughs will work together to dramatically halt aging and put a stop to most old-age diseases.
Let’s start with a little challenge: which of the following tunes was composed by an AI, and which by an HI (Human Intelligence)?
I’ll tell you at the end of the answer which tune was composed by an AI and which by an HI. For now, if you’re like most people, you’re probably unsure. Both pieces of music are pleasing to the ear. Both have good rhythm. Both could be part of the soundtrack of a Hollywood film, and you would never know that one was composed by an AI.
And this is just the beginning.
In recent years, AI has managed to –
Compose a piece of music (Transits – Into an Abyss) that was performed by the London Symphony Orchestra and received praise from reviewers. [source: you can hear the performance in this link]
Identify emotions in photographs of people, and create an abstract painting that conveys these emotions to the viewer. The AI can even analyze the painting as it is being created, and decide whether it’s achieving its objectives [source: Rise of the Robots].
Create a movie trailer (it’s actually pretty good – watch it here).
Now, don’t get me wrong: most of these achievements don’t even come close to the level of an experienced human artist. But AI has something that humans don’t: it’s capable of training itself on millions of samples and constantly improving itself. That’s how AlphaGo, the AI that recently wiped the floor with Go’s most proficient players, got so good at the game: it played a few million games against itself and discovered new strategies and best moves. It acquired an intuition for the game, and kept rapidly evolving to improve itself.
And there’s no reason that AI won’t be able to do that in art as well.
In the next decade, we’ll see AI composing music and even poems, drawing abstract paintings, and writing books and movie scripts. And it’ll get better at it all the time.
So what happens to art, when AI can create it just as easily as human beings do?
For starters, we all benefit. In the future, when you upload your new YouTube clip, you’ll be able to have an AI add original music that fits the clip perfectly. The AI will also write your autobiography just by going over your Facebook and Gmail history, and if you want, it will turn it into a movie script and direct it too. It’ll create new comic books easily and automatically – script, drawing and coloring alike – and what’s more, it’ll fit each story to the themes that you like. You want to see Superman fighting the Furry Triple-Breasted Slot Machines of Pandora? You got it.
That’s what happens when you take a task that humans need to invest decades to become really good at, and let computers perform it quickly and efficiently. And as a result, even poor people will be able to have a flock of AI artists at their beck and call.
What Will the Artists Do?
At this point you may ask yourself what all the human artists will do in that future. Well, the bad news is that, obviously, we won’t need as many human artists. The good news is that the few human artists who are left will make a fortune by leveraging their skills.
Let me explain what I mean by that. Homer is one of the earliest poets we know of. He was probably dirt poor. Why? Because he had to wander from inn to inn, and could only recite his work aloud for audiences of a few dozen people at a time, at most. Shakespeare was much more successful: he could have his plays performed in front of hundreds of people at once. And Justin Bieber is a millionaire, because he leverages his art with technology: once he produces a great song, everyone gets it immediately via YouTube, or by paying for and downloading it on iTunes.
Great composers will still exist in the future, and they will work at creating new kinds of music – and then have the AI create variations on that theme, and earn revenue from it. Great painters will redefine drawing and painting, and they will teach the AI to paint accordingly. Great script writers will invent new styles of stories that the old AI, trained only on yesterday’s material, could never have produced on its own.
And of course, every time a new art style is invented, it’ll take the AI only a few years – or maybe just a few days – to teach itself that new style. But the creative, crazy, charismatic human artists who created that style will have earned the status of artistic superstars by then: the people who changed our definitions of what is beautiful, ugly, true or false. They will be the people who really create art, instead of just making boring variations on a theme.
The truly best artists, the ones who can change our outlook about life and impact our thinking in completely unexpected ways, will still be here even a hundred years into the future.
“Surveys have shown that most Americans vastly underestimate the existing extent of inequality, and when asked to select an “ideal” national distribution of income, they make a choice that, in the real world, exists only in Scandinavian social democracies.”
The amazing thing is that most people simply don’t realize just how bad things are. Human beings have a tendency to compare their life quality with that of their neighbors and relatives, not with the millionaires and billionaires.
Surveys show that Americans generally believe the wealthiest 20% of Americans possess just 59 percent of the nation’s wealth, and that the bottom 40% possess 9 percent of it [source]. This is nowhere near the truth: in fact, the top 20% possess 84 percent of the wealth, and the bottom 40% possess only 0.3 percent [source].
Here’s How Bad Things Actually Are:
Between the years 1983–2009, Americans became wealthier as a whole, but the bottom 80 percent of income earners saw a net decrease in their wealth. At the same time, the top 1 percent of income earners captured more than 40 percent of the nation’s wealth increase [source].
Overall, the earnings of the top 1 percent rose by 278 percent between 1979 and 2007. Over the same period, the earnings of median earners (that’s probably you and me) increased by only 35 percent [source – The Second Machine Age].
Inequality in the US (as measured by the CIA according to the Gini index) is far more extreme than in places like Egypt, Croatia, Vietnam or Greece [source].
Between the years 2009–2012, 95 percent of total income gains went to the wealthiest 1 percent [source].
Economic mobility in the US – i.e., whether people can rise (or sink) from one economic class to another – is significantly lower than in many European countries. If you were born to a family in the bottom 20% of income, you have a 42 percent chance of staying at that income level as an adult. Compare that to Denmark (25 percent) or even Britain (30 percent) [source]. That means the American dream of achieving success through hard work is much more attainable if you’re living in a Nordic country, or even in the freaking monarchy of the United Kingdom.
Inequality also has implications for your life expectancy. Geographic inequality in life expectancy increased between 1980 and 2014: some counties in the US have a life expectancy twenty years lower than the highest-ranking counties. Yes, you read that right. The average person in eastern Kentucky or southwestern West Virginia can expect to live about twenty years less than a person in, say, central Colorado. And the disparity between US counties shows no sign of shrinking anytime soon [source].
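For readers unfamiliar with the Gini index mentioned above, here is a minimal sketch of how it is computed from individual incomes (the incomes below are made-up illustrative numbers, not real US data):

```python
# A minimal sketch of the Gini index: 0 means perfect equality,
# values approaching 1 mean one person holds everything.
def gini(incomes):
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    # Mean-absolute-difference formulation:
    # G = sum_i (2i - n - 1) * x_i / (n * total), with i starting at 1.
    weighted = sum((2 * i - n - 1) * x for i, x in enumerate(xs, start=1))
    return weighted / (n * total)

print(gini([10, 10, 10, 10]))  # 0.0  – everyone earns the same
print(gini([0, 0, 0, 100]))    # 0.75 – one person holds everything
```

Real-world values sit between these extremes; the CIA figures cited above rank countries by exactly this kind of number.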
What It All Means
Reading these statistics, you may say that inequality is just a symptom of the times and of technological progress, and there’s definitely some evidence for that.
You may highlight the fact that ‘the water rises for everyone’, and indeed, that’s true as well: some may rise more rapidly than others, but in general, over the last hundred years, the average American’s quality of life has risen.
You may even say that some billionaires, like Bill Gates and Mark Zuckerberg, are giving back their wealth to society. The data shows that the incredibly wealthy donate around 10% of their net worth over their lifetime. And again, that’s correct (and incredibly admirable).
The only problem is, none of these explanations matters in the end. Because inequality still exists, and it has some unfortunate side effects: people may not realize exactly how bad it is, but they still feel it’s pretty bad. They realize that the rich keep getting richer. They understand that the rich and wealthy have a large influence on the US Congress and Senate [source].
In short, they understand that the system is skewed, and not in their favor.
And so, they demand change. Any kind of change – just something that will turn the system upside down and make the wealthy elites rethink everything they know. Populist politicians (and occasionally ones who really do want to make a difference) then use these yearnings to get elected.
Indeed, when you look at the candidate qualities that mattered most to voters in the 2016 US elections, you can see that the ability to bring about change was far more important than traits like “good judgement”, “experience” or even “cares about me”. And there you have it: from rampant inequality to the Trump regime.
Now, things may not be as bleak as they seem. Maybe Trump will work towards minimizing inequality. But even if he won’t (or can’t), I would like to think that the political system in the US has learned its lesson, and that the Democratic Party has realized that in the next election cycle it needs to put inequality on its agenda, and find ways to fight it.
I was asked on Quora how the tanks of the future are going to be designed. Here’s my answer – I hope it’ll make you reflect once again on the future of war and what it entails.
And now, consider this: the Israeli Merkava Mark IV tank.
It is one of the most technologically advanced tanks in the world. It is armed with a massive 120 mm smoothbore gun that fires shells with immense explosive power, with two roof-mounted machine guns, and with a 60 mm mortar in case the soldiers inside really want to make a point. However, the tank has to be deployed on the field, and needs to reach its target. It also costs around $6 million.
Now consider this: the Israeli geek (picture taken from the Israeli reality show – Beauty and the Geek). The geek is the one on the left, in case you weren’t sure.
With the click of a button and the aid of some hacking software available on the darknet, our humble Israeli geek can paralyze whole institutions, governments and critical infrastructures. He can derail trains (it happened in Poland), deactivate sewage pumps so that contaminated water mixes with drinking water (it happened in Texas), or even cut the power supply to tens of thousands of people (it happened in Ukraine). And if that isn’t bad enough, he could take control of enemy citizens’ wireless vibrators and operate them to his and/or their satisfaction (it has potentially happened already).
Oh, and the Israeli geek works for free. Why? Because he loves hacking stuff. Just make sure you cover the licensing costs for the software he’s using, or he might hack your vibrator next.
So, you asked – “how will futuristic tanks be designed”?
I answer, “who cares”?
But Seriously Now…
When you’re thinking about the future, you have to realize that some paradigms are going to change. One of those paradigms is physical warfare. You see, tanks were created to do battle in a physical age, in which they had an important role: to protect troops and provide overwhelming firepower while bringing those troops wherever they needed to be. That was essentially the German blitzkrieg strategy.
In the digital age, however, everything is connected to the internet, or very soon will be. Not just every computer, but every bridge, every building, every power plant and energy grid, and every car. And as security futurist Marc Goodman noted in his book Future Crimes, “when everything is connected, everything is vulnerable”. Any piece of infrastructure that you connect to the internet, immediately becomes vulnerable to hacking.
Now, here’s a question for you: what is the purpose of war?
I’ll give you a hint: it’s not about driving tanks with roaring engines around. It’s not about soldiers running and shooting in the field. It’s not even about dropping bombs from airplanes. All of the above are just tools for achieving the real purpose: winning the war by either making the enemy surrender to you, or neutralizing it completely.
And how do you neutralize the enemy? It’s quite simple: you demolish the enemy’s factories; you destroy their cities; you ruin the enemy citizens’ morale to the point where they can’t fight you anymore.
In the physical age, armies clashed on the field because each army was on the way to the other side’s cities and territory. That’s why you needed fast tanks with awesome armament and armor. But today, in the digital age, hackers can leap straight over the battlefield and wage war directly between cities in real time. They can shut down hospitals and power plants, kill everyone with a heart pacemaker or an insulin pump, and make trains and cars collide with each other. In short, they could shut down entire cities.
So again – who needs tanks?
I’m not saying there aren’t going to be tanks. The physical aspect of warfare still counts, and one can’t just disregard it. However, tanks simply don’t count as much in comparison to the cyber-security aspects of warfare (partly because tanks themselves are connected nowadays).
Again, that does not mean that tanks are useless. We still need to figure out the exact relationship between tanks and geeks, and precisely where, when and how each needs to be deployed in the new digital age. But if you were to ask me in ten years what’s more important – the tank or the geek – my bet would definitely be on the geek.
If this aspect of future warfare interests you, I invite you to read the two papers I’ve published in the European Journal of Futures Research and in Foresight, about future scenarios for crime and terror that rely on the internet of things.
I was recently asked on Quora whether there is some kind of a grand scheme to things: a destiny that we all share, a guiding hand that acts according to some kind of moral rules.
This is a great question, and one that we’re all worried about. While there’s no way to know for sure, the evidence points against this kind of fate-biased thinking – as a forecasting experiment funded by the US Department of Defense recently showed.
In 2011, the US Department of Defense began funding an unusual project: the Good Judgement Project. In this project, led by Philip E. Tetlock, Barbara Mellers and Don Moore, volunteers were asked to rate the chance of occurrence of certain events. Overall, thousands of people took part in the exercise and answered hundreds of questions over a period of two years. Their answers were checked constantly, as soon as the events in question actually occurred (or failed to occur).
After two years, the directors of the project identified a subset of people they called superforecasters. These top forecasters were doing so well that their predictions were 30% more accurate than those of intelligence officials who had access to highly classified information!
(And yes, for the statistics lovers among us: the researchers absolutely did run statistical tests, which showed that the chances of those people being accidentally so accurate were minuscule. The superforecasters kept doing well, over and over again.)
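The Good Judgement Project measured forecast accuracy with Brier scores – roughly, the squared gap between the probability a forecaster stated and what actually happened. A minimal sketch, with made-up forecasts for illustration:

```python
# Brier score for a single yes/no forecast: the squared difference
# between the stated probability and the actual outcome.
# 0 is a perfect score; lower is better.
def brier(prob, outcome):
    # prob: forecast probability that the event happens (0..1)
    # outcome: 1 if it happened, 0 if it didn't
    return (prob - outcome) ** 2

confident_and_right = brier(0.9, 1)  # ≈ 0.01
hedged = brier(0.5, 1)               # 0.25 – fence-sitting is penalized
confident_and_wrong = brier(0.9, 0)  # ≈ 0.81 – bold wrong calls cost the most
```

Averaged over hundreds of questions, this is the kind of number on which the superforecasters beat everyone else.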
Once the researchers identified this subset of people, they began analyzing their personalities and methods of thinking. You can read about it in some of the papers about the research (attached at the end of this answer), as well as in the great book – Superforecasting: the Art and Science of Prediction. For this answer, the important thing to note is that those superforecasters were also tested for what I call “the fate bias”.
It’s obvious why we want to believe in fate. It gives our woes, and the sufferings of others, a special meaning. It justifies our pains, and makes us think that “it’s all for a reason”. Our belief in fate helps us deal with bereavement and with physical and mental pain.
But it also makes us lousy forecasters.
Fate is Incompatible with Accurate Forecasting
In the Good Judgement Project, the researchers tested the participants for their belief in fate. They found that the superforecasters utterly rejected fate. Even more significantly, the better an individual was at forecasting, the more strongly he rejected fate – and the more strongly he rejected fate, the more accurate his forecasts turned out to be.
Fate is Incompatible with the Evidence
And so, it seems that fate is simply incompatible with the evidence. People who try to predict the occurrence of events in a ‘fateful’ way, as if they were obeying a certain guiding hand, are prone to failure. On the other hand, those who believe there is no ‘higher order to things’, and plan accordingly, usually turn out to be right.
Does that mean there is no such thing as fate, or a grand scheme? Of course not. We can never disprove the existence of such a ‘grand plan’. What we can say with some certainty, however, is that human beings who claim to know what that plan actually is, seem to be constantly wrong – whereas those who don’t bother explaining things via fate, find out that reality agrees with them time and time again.
So there may be a grand plan. We may be in a movie, or God may be looking down on us from up above. But if that’s the case, it’s a god we don’t understand, and the plan – if there actually is one – is completely undecipherable to us. As Neil Gaiman and the late Terry Pratchett beautifully wrote –
God does not play dice with the universe; He plays an ineffable game of His own devising… an obscure and complex version of poker in a pitch-dark room, with blank cards, for infinite stakes, with a Dealer who won’t tell you the rules, and who smiles all the time.
And if that’s the case, I’d rather just say out loud – “I don’t believe in fate” – and plan and invest accordingly.
You’ll simply have better success that way. And when the universe is cheating at poker with blank cards, Heaven knows you need all the help you can get.
For further reading, here are links to some interesting papers about the Good Judgement Project and the insights derived from it –
We hear all around us about the major breakthroughs that await just around the bend: of miraculous cures for cancer, of amazing feats of genetic engineering, of robots that will soon take over the job market. And yet, underneath all the hubbub, there lurk the little stories – the occasional bizarre occurrences that indicate the kind of world we’re going into. One of those recent tales happened at the beginning of this year, and it can provide a few hints about the future. I call it – The Tale of the Little Drone that Could.
Our story begins towards the end of January 2017, when said little drone was launched in southern Arizona as part of a simple exercise. The drone was part of the Shadow RQ-7Bv2 series, but we’ll just call it Shady from now on. Drones like Shady are usually used for surveillance by the US Army, and should not stray more than 77 miles (120 km) from their ground-based control station. But Shady had other plans in the mind it didn’t have: as soon as it was launched, all communications were lost between the drone and the control station.
Other, more primitive drones, would probably have crashed at around this stage, but Shady was a special drone indeed. You see, Shadow drones enjoy a high level of autonomy. In simpler words, they can stay in the air and keep on performing their mission even if they lose their connection with the operator. The only issue was that Shady didn’t know what its mission was. And as the confused operators on the ground realized at that moment – nobody really had any idea what it was about to do.
Autonomous aerial vehicles are usually programmed to perform certain tasks when they lose communication with their operators. Emergency systems are immediately activated as soon as the drone realizes that it’s all alone, up there in the sky. Some of them circle above a certain point until radio connection is reestablished. Others attempt to land straight away on the ground, or try to return to the point from which they were launched. This, at least, is what the emergency systems should be doing. Except that in Shady’s case, a malfunction happened, and they didn’t.
Or maybe they did.
Some believe that Shady’s memory accidentally contained the coordinates of its former home in a military base in Washington state, and valiantly attempted to come back home. Or maybe it didn’t. These are, obviously, just speculations. It’s entirely possible that the emergency systems simply failed to jump into action, and Shady just kept sailing up in the sky, flying towards the unknown.
Be that as it may, our brave (at least in the sense that it felt no fear) little drone left its frustrated operators behind and headed north. It rode the strong winds of that day, sailing over forests and Native American reservations. Throughout its flight, the authorities kept track of the drone by radar, but after five hours it reached the Rocky Mountains. It should not have been able to cross them, and since the military lost its radar signature at that point, everyone just assumed Shady had crashed.
But it didn’t.
Instead, Shady rose higher up in the air, to a height of 12,000 feet (about 3,700 meters), and glided up and over the Rocky Mountains, in environmental conditions it was not designed for and over distances it was never meant to cover. Nonetheless, it kept on buzzing north, undeterred, on a 632-mile journey, until it crashed near Denver. We don’t yet know the reason for the crash, but it’s likely that Shady simply ran out of fuel at about that point.
And that is the tale of Shady, the little drone that never thought it could – mainly since it doesn’t have any thinking capabilities at all – but went the distance anyway.
What Does It All Mean?
Shady is just one autonomous robot out of many. Autonomous robots, even limited ones, can perform certain tasks with minimal involvement by a human operator. Shady’s tale is simply the result of a bug in the robot’s operating system. There’s nothing strange in that by itself, since we discover bugs in practically every program we use: the Word program I’m using to write this post occasionally (though rarely, fortunately) gets stuck, or even starts deleting letters and words by itself, for example. These bugs are annoying, but we realize they’re practically inevitable in programs as complex as the ones we use today.
Well, Shady had a bug as well. The only difference between Word and Shady is that the latter is a $1.5 million military drone, and the bug caused it to cross three states and the Rocky Mountains with no human supervision. It can safely be said that we’re all lucky that Shady is normally used only for surveillance, and is thus unarmed. But Shady’s less innocent cousin, the Predator drone, is also used to attack military targets on the ground, and is thus equipped with two Hellfire anti-tank missiles and six Griffin air-to-surface missiles.
I rather suspect we would be less amused by this episode if one of the armed Predators were to take Shady’s place and sail across America with nobody knowing where it’s going, or what it’s planning to do once it gets there.
Robots and Urges
I’m sure the emotionally laden story at the beginning of this post made some of you laugh, and for a very good reason. Robots have no will of their own. They have no thoughts or self-consciousness. The sophisticated autonomous robots of the present, though, exhibit “urges”. Programmers build certain urges into the robots, which are activated in pre-defined ways.
In many ways, autonomous robots resemble insects. Both are conditioned – by programming or by the structure of their simple neural systems – to act in certain ways, in certain situations. From that viewpoint, insects and autonomous robots both have urges. And while insects are quite complex organisms, they have bugs as well – which is why mosquitoes keep flying into the light of electric traps at night. Their simple urges are incapable of dealing with the new demands of the modern environment. And if insects can experience bugs in unexpected environments, how much more so for autonomous robots?
Shady’s tale shows what happens when a robot obeys the wrong kind of urges. Such bugs are inevitable in any complex system, but their impact could be disastrous when they occur in autonomous robots – especially of the armed variety that can be found in the battlefield.
Will governments be deterred from employing autonomous robots in war? I highly doubt it. We failed to stop even the potentially world-shattering proliferation of nuclear weapons, so putting a halt to robotic proliferation doesn’t seem likely. But at least when the next Shady or Freddy the Predator gets lost, you’ll be able to shake your head in disappointment and mention that you just knew it would happen, that you warned everyone in advance, and nobody listened to you.
And when that happens, you’ll finally know what being a futurist feels like.
OK, so I know the headline of this post isn’t really the sort of question a stable and serious scientist, or even a futurist, should be asking. But I was asked this question on Quora, and thought it warranted some thought. So here’s my answer to this mystery that has hounded movie directors for the last century or so!
If Japan actually managed to create the huge robots / exoskeletons so favored in the anime genre, all the generals in all the opposing armies would stand up and clap wildly for them. Because these robots are practically the worst war-machines ever. And believe it or not, I know that because we conducted actual research into this area, together with Dr. Aharon Hauptman and Dr. Liran Antebi.
But before I tell you about that research, let me say a few words about the woes of huge humanoid robots.
First, there are already some highly sophisticated exoskeleton suits developed by major military contractors like Raytheon’s XOS2 and Lockheed Martin’s HULC. While they’re definitely the coolest thing since sliced bread and frosted donuts, they have one huge disadvantage: they need plenty of energy to work. As long as you can connect them to a powerline, it shouldn’t be too much of an issue. But once you ask them to go out to the battlefield… well, after one hour at most they’ll stop working, and quite likely trap the human operating them.
Some companies, like Boston Dynamics, have tried to overcome the energy challenge by adding a diesel engine to their robots. Which is great, except for the fact that it’s still pretty cumbersome, and extremely noisy. Not much use for robots that are supposed to accompany marines on stealth missions.
But who wants stealthy robots, anyway? We’re talking about gargantuan robots, right?!
Well, here’s the thing: the larger and heavier the robot is, the more energy you need to operate it. That means you can’t really add much armor to it. And the larger you make it, the more unwieldy it becomes. There’s a reason elephants are so sturdy, with thick legs – that’s the only way they can support their enormous body weight. Huge robots, which are much heavier than elephants, can’t even have legs with joints. When the MK. II Mech was unveiled at Maker Faire 2015, it reached a height of 15 feet, weighed around 6 tons… and could only move by crawling on a caterpillar track. So, in short, it was a tank.
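The elephant argument above is the classic square–cube law: scale up a robot’s linear dimensions and its mass (and the energy needed to move it) grows with the cube of the scale, while a leg’s load-bearing strength grows only with the square. The sketch below uses illustrative numbers, not engineering data on any real mech:

```python
def scale_robot(scale):
    """Square-cube law: multiply all linear dimensions by `scale`.

    Mass (and roughly the energy demand) grows with volume, scale**3;
    a leg's load-bearing strength grows with its cross-sectional
    area, scale**2. Purely illustrative arithmetic.
    """
    mass_factor = scale ** 3        # volume ~ weight ~ energy demand
    strength_factor = scale ** 2    # cross-section ~ leg strength
    stress_on_legs = mass_factor / strength_factor  # grows linearly
    return mass_factor, strength_factor, stress_on_legs

# Make a human-sized robot 5x taller (very roughly MK. II Mech scale):
mass, strength, stress = scale_robot(5)
# -> 125x the mass, only 25x the leg strength: 5x the stress per leg
```

Which is exactly why a 15-foot, 6-ton machine ends up on caterpillar tracks rather than jointed legs.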
And don’t even think about it rising to the air. Seriously. Just don’t.
But let’s say you manage to somehow bypass all of those pesky energy constraints. Even in that case, huge humanoid robots would not be a good idea because of two main reasons: shape, and size.
Let’s start with shape. The human body evolved the way it did – limbs, groin, hair and all – to cope with the hardships of life on the one hand, while also being able to have sex, give birth and generally do fun stuff. But robots aren’t supposed to be doing fun stuff. Unless, that is, you want to build a huge Japanese humanoid sex robot. And yes, I know that sounds perfectly logical for some horribly unfathomable reason, but that’s not what the question is about.
So – if you want a battle-robot, you just don’t need things like legs, a groin, or even a head with a vulnerable computer-brain. You don’t need a huge multifunctional battle-robot. Instead, you want small and efficient robots that are uniquely suited to the task set for them. If you want to drop bombs, use a bomber drone. If you want to kill someone, use a simple robot with a gun. Heck, it can look like a child’s toy, or like a ball, but what does it matter? It just needs to get the job done!
Last but not least, large humanoid robots are not only inefficient, cumbersome and impractical, but are also extremely vulnerable to being hit. One solid hit to the head will take them out. Or to a leg. Or the torso. Or the groin of that gargantuan Japanese sex-bot that’s still wondering why it was sent to a battlefield where real tanks are doing all the work. That’s why armies around the world are trying to figure out how to use swarms of drones instead of deploying one large robot: if one drone takes the hit, the rest of the swarm still survives.
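The swarm argument is, at bottom, simple arithmetic about graceful degradation. Here’s a toy sketch of it, with made-up numbers (the function name and the assumption that all units contribute equally are mine):

```python
def surviving_fraction(n_units, hits):
    """Fraction of a force's capability left after `hits` units are
    destroyed, assuming each unit contributes equally. One giant robot
    (n_units=1) loses everything on the first hit; a swarm degrades
    gracefully. Purely illustrative.
    """
    destroyed = min(hits, n_units)
    return (n_units - destroyed) / n_units

# surviving_fraction(1, 1)  -> 0.0  (the lone mech is gone)
# surviving_fraction(10, 1) -> 0.9  (the swarm keeps 90% of its capability)
```

The same solid hit that takes out the gargantuan robot costs a ten-drone swarm only a tenth of its capability.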
So now that I’ve thrown cold water on the idea of large Japanese humanoid robots, here’s the final rub. A few years ago I took part in a research project, along with Dr. Aharon Hauptman and Dr. Liran Antebi, that was meant to assess the capabilities robots will possess in the next twenty years. I’ll cut straight to the chase: the experts we interviewed and surveyed believed that in twenty years or less we’ll have –
Robots with perfect camouflage capabilities in visible light (essentially invisibility);
Robots that can heal themselves, or use objects from the environment as replacement parts.
One of the only categories about which the experts were skeptical was that of “transforming platforms” – i.e. robots that can change shape to adapt themselves to different tasks. There is just no need for these highly versatile (and expensive, inefficient and vulnerable) robots, when you can send ten highly specialized robots to perform each task in turn. Large humanoid robots are the same. There’s just no need for them in warfare.
So, to sum things up: if Japan were to construct anime-style Gundam-like robots and send them to war, I really hope they prepare them for having sex, because they would be screwed over pretty horribly.