Who’ll Win the Next War: the Tank or the Geek?

I was asked on Quora how the tanks of the future are going to be designed. Here’s my answer – I hope it’ll make you reflect once again on the future of war and what it entails.

First, consider this: the Israeli Merkava Mark IV tank.

Merkava Mark IV. Source: Michael Mass, Yad La-Shiryon, found on Wikipedia

It is one of the most technologically advanced tanks in the world. It is armed with a massive 120 mm smoothbore gun that fires shells with immense explosive power, with two roof-mounted machine guns, and with a 60 mm mortar in case the soldiers inside really want to make a point. However, the tank has to be deployed in the field and needs to reach its target. It also costs around $6 million.

Now consider this: the Israeli geek (picture taken from the Israeli reality show – Beauty and the Geek). The geek is the one on the left, in case you weren’t sure.

The common Israeli Geek. He’s the one on the left of the picture. Source: Israeli reality show – Beauty and the Geek.

With the click of a button and the aid of some hacking software available on the Darknet, our humble Israeli geek can paralyze whole institutions, governments and critical infrastructures. He can derail trains (happened in Poland), deactivate sewage pumps and mix contaminated water with drinking water (happened in Texas), or even cut the power supply to tens of thousands of people (happened in Ukraine). And if that isn’t bad enough, he could take control of the enemy’s female citizens’ wireless vibrators and operate them to his and/or their satisfaction (potentially happened already).

Oh, and the Israeli geek works for free. Why? Because he loves hacking stuff. Just make sure you cover the licensing costs for the software he’s using, or he might hack your vibrator next.

So, you asked – “How will futuristic tanks be designed?”

I answer, “Who cares?”

 

But Seriously Now…

When you’re thinking of the future, you have to realize that some paradigms are going to change. One of those paradigms is that of physical warfare. You see, tanks were created to do battle in a physical age, in which they had an important role: to protect troops and provide overwhelming firepower while bringing those troops wherever they needed to be. That was essentially the German blitzkrieg strategy.

In the digital age, however, everything is connected to the internet, or very soon will be. Not just every computer, but every bridge, every building, every power plant and energy grid, and every car. And as security futurist Marc Goodman noted in his book Future Crimes, “when everything is connected, everything is vulnerable”. Any piece of infrastructure that you connect to the internet immediately becomes vulnerable to hacking.

Now, here’s a question for you: what is the purpose of war?

I’ll give you a hint: it’s not about driving tanks with roaring engines around. It’s not about soldiers running and shooting in the field. It’s not even about dropping bombs from airplanes. All of the above are just tools for achieving the real purpose: winning the war by either making the enemy surrender to you, or neutralizing it completely.

And how do you neutralize the enemy? It’s quite simple: you demolish the enemy’s factories; you destroy their cities; you ruin their citizens’ morale to the point where they can’t fight you anymore.

In the physical age, armies clashed on the field because each army was on the way to the other side’s cities and territory. That’s why you needed fast tanks with awesome armament and armor. But today, in the digital age, hackers can leap straight over the battlefield and take the war directly to the enemy’s cities, in real time. They can shut down hospitals and power plants, kill everyone with a heart pacemaker or an insulin pump, and make trains and cars collide with each other. In short, they could shut down entire cities.

So again – who needs tanks?

 

And Still…

I’m not saying there aren’t going to be tanks. The physical aspect of warfare still counts, and one can’t just disregard it. However, tanks simply don’t count as much in comparison to the cyber-security aspects of warfare (partly because tanks themselves are connected nowadays).

Again, that does not mean that tanks are useless. We still need to figure out the exact relationship between tanks and geeks, and precisely where, when and how each should be deployed in the new digital age. But if you were to ask me in ten years what’s more important – the tank or the geek – then my bet would definitely be on the geek.

 


If this aspect of future warfare interests you, I invite you to read the two papers I’ve published in the European Journal of Futures Research and in Foresight, about future scenarios for crime and terror that rely on the internet of things.

Should You Consider Fate when Planning Ahead?

I was recently asked on Quora whether there is some kind of a grand scheme to things: a destiny that we all share, a guiding hand that acts according to some kind of moral rules.

This is a great question, and one that we’re all worried about. While there’s no way to know for sure, the evidence points against this kind of fate-biased thinking – as a forecasting experiment funded by the US Department of Defense recently showed.

In 2011, the US Department of Defense began funding an unusual project: the Good Judgment Project. In this project, led by Philip E. Tetlock, Barbara Mellers and Don Moore, people were asked to volunteer their time and rate the chances of occurrence of certain events. Overall, thousands of people took part in the exercise, and answered hundreds of questions over a period of two years. Their answers were checked constantly, as soon as the events actually occurred.

After two years, the directors of the project identified a sub-type of people they called Superforecasters. These top forecasters were doing so well that their predictions were 30% more accurate than those of intelligence officials who had access to highly classified information!

(And yes, for the statistics lovers among us: the researchers absolutely did run statistical tests, which showed that the chances of those people being so accurate by mere luck were minuscule. The superforecasters kept doing well, over and over again.)
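The project’s actual scoring used probabilistic methods (Brier scores and the like), but the basic logic of such a “was it just luck?” test is easy to sketch. Here is a minimal, purely illustrative simulation in Python; the numbers and the yes/no framing are mine, not the project’s:

```python
import random

def luck_p_value(n_questions: int, n_correct: int,
                 chance: float = 0.5, n_simulations: int = 100_000) -> float:
    """Estimate how often a pure guesser does at least as well as our forecaster."""
    at_least_as_good = 0
    for _ in range(n_simulations):
        # A guesser answers each yes/no question with probability `chance` of being right.
        correct = sum(random.random() < chance for _ in range(n_questions))
        if correct >= n_correct:
            at_least_as_good += 1
    return at_least_as_good / n_simulations

# A hypothetical forecaster who got 65 of 100 yes/no questions right:
print(luck_p_value(n_questions=100, n_correct=65))  # roughly 0.002 -- a guesser almost never does this well
```

The real analysis is far more careful than this toy version, but the point is the same: a track record that good is vanishingly unlikely to be a fluke.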

Once the researchers identified this subset of people, they began analyzing their personalities and methods of thinking. You can read about it in some of the papers about the research (attached at the end of this answer), as well as in the great book – Superforecasting: the Art and Science of Prediction. For this answer, the important thing to note is that those superforecasters were also tested for what I call “the fate bias”.


The Fate Bias

There’s no denying that most people believe in fate of some sort: a guiding hand that makes everything happen for a reason, in accordance with some grand scheme or moral rules. This tendency seems to manifest itself most strongly in children, and in God-believers (84.8 percent of whom believe in fate), but even 54.3 percent of atheists believe in fate.

It’s obvious why we want to believe in fate. It gives our woes, and the sufferings of others, a special meaning. It justifies our pains, and makes us think that “it’s all for a reason”. Our belief in fate helps us deal with bereavement and with physical and mental pain.

But it also makes us lousy forecasters.

 

Fate is Incompatible with Accurate Forecasting

In the Good Judgment Project, the researchers ran tests on the participants to check for their belief in fate. They found that the superforecasters utterly rejected fate. Even more significantly, the better an individual was at forecasting, the more inclined he was to reject fate. And the more he rejected fate, the more accurate he was at forecasting the future.

 

Fate is Incompatible with the Evidence

And so, it seems that fate is simply incompatible with the evidence. People who try to predict the occurrence of events in a ‘fateful’ way, as if they were obeying a certain guiding hand, are prone to failure. On the other hand, those who believe there is no ‘higher order to things’ and plan accordingly usually turn out to be right.

Does that mean there is no such thing as fate, or a grand scheme? Of course not. We can never disprove the existence of such a ‘grand plan’. What we can say with some certainty, however, is that human beings who claim to know what that plan actually is, seem to be constantly wrong – whereas those who don’t bother explaining things via fate, find out that reality agrees with them time and time again.

So there may be a grand plan. We may be in a movie, or God may be looking down on us from up above. But if that’s the case, it’s a god we don’t understand, and the plan – if there actually is one – is completely undecipherable to us. As Neil Gaiman and the late Terry Pratchett beautifully wrote –

God does not play dice with the universe; He plays an ineffable game of His own devising… an obscure and complex version of poker in a pitch-dark room, with blank cards, for infinite stakes, with a Dealer who won’t tell you the rules, and who smiles all the time.

And if that’s the case, I’d rather just say out loud – “I don’t believe in fate” – and plan and invest accordingly.

You’ll simply have better success that way. And when the universe is cheating at poker with blank cards, Heaven knows you need all the help you can get.

 


 

For further reading, here are links to some interesting papers about the Good Judgment Project and the insights derived from it –

Bringing probability judgments into policy debates via forecasting tournaments

Superforecasting: How to Upgrade Your Company’s Judgment

Identifying and Cultivating Superforecasters as a Method of Improving Probabilistic Predictions

Psychological Strategies for Winning a Geopolitical Forecasting Tournament

Rethinking the training of intelligence analysts

 

The Little Military Drone that Could

We hear all around us about the major breakthroughs that await just around the bend: of miraculous cures for cancer, of amazing feats of genetic engineering, of robots that will soon take over the job market. And yet, underneath all the hubbub, there lurk the little stories – the occasional bizarre occurrences that indicate the kind of world we’re going into. One of those recent tales happened at the beginning of this year, and it can provide a few hints about the future. I call it – The Tale of the Little Drone that Could.

Our story begins towards the end of January 2017, when said little drone was launched in Southern Arizona as part of a simple exercise. The drone was part of the Shadow RQ-7Bv2 series, but we’ll just call it Shady from now on. Drones like Shady are usually used for surveillance by the US Army, and should not stray more than 77 miles (120 km) away from their ground-based control station. But Shady had other plans in the mind it didn’t have: as soon as it was launched, all communications were lost between the drone and the control station.

Shady the drone. Source: Department of Defense

Other, more primitive drones would probably have crashed at around this stage, but Shady was a special drone indeed. You see, Shadow drones enjoy a high level of autonomy. In simpler words, they can stay in the air and keep performing their mission even if they lose their connection with the operator. The only issue was that Shady didn’t know what its mission was. And as the confused operators on the ground realized at that moment – nobody really had any idea what it was about to do.

Autonomous aerial vehicles are usually programmed to perform certain tasks when they lose communication with their operators. Emergency systems are immediately activated as soon as the drone realizes that it’s all alone, up there in the sky. Some of them circle above a certain point until radio connection is reestablished. Others attempt to land straight away on the ground, or try to return to the point from which they were launched. This, at least, is what the emergency systems should be doing. Except that in Shady’s case, a malfunction happened, and they didn’t.

Or maybe they did.

Some believe that Shady’s memory accidentally contained the coordinates of its former home at a military base in Washington state, and that it valiantly attempted to come back home. Or maybe it didn’t. These are, obviously, just speculations. It’s entirely possible that the emergency systems simply failed to jump into action, and Shady just kept sailing up in the sky, flying towards the unknown.
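The lost-link fallbacks described above – loiter, land, or return to launch – boil down to a simple pre-programmed decision rule. Here is a minimal, purely illustrative sketch in Python; it is not the Shadow’s actual flight software, and every name and threshold in it is hypothetical:

```python
from enum import Enum, auto

class LostLinkAction(Enum):
    LOITER = auto()             # circle a holding point until the link returns
    LAND_NOW = auto()           # descend and land immediately
    RETURN_TO_LAUNCH = auto()   # fly back to the launch coordinates

def on_link_lost(fuel_fraction: float, home_known: bool,
                 minutes_since_loss: float) -> LostLinkAction:
    """Toy decision rule for a drone that just lost its control link."""
    if not home_known:
        return LostLinkAction.LAND_NOW        # nowhere to go: get on the ground
    if minutes_since_loss < 10:
        return LostLinkAction.LOITER          # give the operators a chance to reconnect
    if fuel_fraction > 0.3:
        return LostLinkAction.RETURN_TO_LAUNCH
    return LostLinkAction.LAND_NOW            # too little fuel to make it home

# Shady-style failure mode: if this routine never runs, or "home" points to the
# wrong base, the drone just keeps flying.
print(on_link_lost(fuel_fraction=0.8, home_known=True, minutes_since_loss=15))
```

Shady’s bug, whatever it actually was, would have lived somewhere in logic like this: either the routine never kicked in, or the “home” it tried to reach was the wrong one.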

Be that as it may, our brave (at least in the sense that it felt no fear) little drone left its frustrated operators behind and headed north. It flew up on the strong winds of that day, and sailed over forests and Native American reservations. Throughout its flight, the authorities kept track of the drone by radar, but after five hours it reached the Rocky Mountains. It should not have been able to pass them, and since the military lost its radar signature at that point, everyone just assumed Shady had crashed.

But it didn’t.

Instead, Shady rose higher up in the air, to a height of 12,000 feet (about 3,700 meters), and glided up and over the Rocky Mountains, in environmental conditions it was not designed for and across distances it was never meant to cover. Nonetheless, it kept on buzzing north, undeterred, on a 632-mile journey, until it crashed near Denver. We don’t know the reason for the crash yet, but it’s likely that Shady simply ran out of fuel at about that point.

The Rocky Mountains. Shady crossed them too.

And that is the tale of Shady, the little drone that never thought it could – mainly since it doesn’t have any thinking capabilities at all – but went the distance anyway.

 

What Does It All Mean?

Shady is just one autonomous robot out of many. Autonomous robots, even limited ones, can perform certain tasks with minimal involvement by a human operator. Shady’s tale is simply the result of a bug in the robot’s operating software. There’s nothing strange in that by itself, since we discover bugs in practically every program we use: the Word program I’m using to write this post occasionally (though rarely, fortunately) gets stuck, or even starts deleting letters and words by itself, for example. These bugs are annoying, but we realize that they’re practically inevitable in programs as complex as the ones we use today.

Well, Shady had a bug as well. The only difference between Word and Shady is that the latter is a military drone worth $1.5 million, and the bug caused it to cross three states and the Rocky Mountains with no human supervision. It can be safely said that we’re all lucky that Shady is normally only used for surveillance, and is thus unarmed. But Shady’s less innocent cousin, the Predator drone, is also used to attack military targets on the ground, and is thus equipped with two Hellfire anti-tank missiles and six Griffin air-to-surface missiles.

A Predator drone firing away. 

I rather suspect that we would be less amused by this episode if one of the armed Predators were to take Shady’s place and sail across America, with nobody knowing where it’s going or what it’s planning to do once it gets there.

 

Robots and Urges

I’m sure that the emotionally laden story at the beginning of this post made some of you laugh, and for a very good reason. Robots have no will of their own. They have no thoughts or self-consciousness. The sophisticated autonomous robots of the present, though, do exhibit “urges”: their programmers build certain urges into them, which are activated in pre-defined ways.

In many ways, autonomous robots resemble insects. Both are conditioned – by programming or by the structure of their simple neural systems – to act in certain ways, in certain situations. From that viewpoint, insects and autonomous robots both have urges. And while insects are quite complex organisms, they have bugs as well – which is the reason that mosquitoes keep flying into the light of electric traps at night. Their simple urges are incapable of dealing with the new demands placed on them by the modern environment. And if insects can experience bugs in unexpected environments, how much more so autonomous robots?
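To make the analogy concrete, here is a toy Python illustration of such a hard-wired “urge”. It assumes the common explanation that night insects navigate by the brightest light in view; the function and the values are mine, purely for illustration:

```python
def steer_toward_brightest(light_sources: list[tuple[str, float]]) -> str:
    """A hard-wired 'urge': always head for the brightest light in view.

    In the insect's original environment the brightest light at night was the
    moon, so this was a decent navigation rule. The rule has no notion of
    'electric trap', so in a modern environment it becomes a fatal bug.
    """
    name, _brightness = max(light_sources, key=lambda src: src[1])
    return name

# Original environment: the urge works fine.
print(steer_toward_brightest([("moon", 0.3), ("starlight", 0.01)]))   # moon

# Modern environment: same urge, disastrous outcome.
print(steer_toward_brightest([("moon", 0.3), ("bug zapper", 0.9)]))   # bug zapper
```

The rule isn’t broken in itself; it’s the new environment that turns it into a bug, which is exactly the situation an autonomous robot faces when reality diverges from what its programmers anticipated.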

Shady’s tale shows what happens when a robot obeys the wrong kind of urges. Such bugs are inevitable in any complex system, but their impact could be disastrous when they occur in autonomous robots – especially of the armed variety that can be found in the battlefield.

 

Scared? Take Action!

If this revelation scares you as well, you may want to sign the open letter that the Future of Life Institute released around a year and a half ago, against the use of autonomous weapons in war. You won’t be alone out there: more than a thousand AI researchers have already signed that letter.

Will governments be deterred from employing autonomous robots in war? I highly doubt that. We failed to stop even the potentially world-shattering nuclear proliferation, so putting a halt to robotic proliferation doesn’t seem likely. But at least when the next Shady or Freddy the Predator gets lost, you’ll be able to shake your head in disappointment, and mention that you just knew this would happen, that you warned everyone in advance, and that nobody listened to you.

And when that happens, you’ll finally know what being a futurist feels like.

 

 

 

Should We Actually Use Huge Japanese Robots in Warfare?

OK, so I know the headline of this post isn’t really the sort a stable and serious scientist, or even a futurist, should be asking. But I was asked this question on Quora, and thought it warranted some thought. So here’s my answer to this mystery that has hounded movie directors for the last century or so!

If Japan actually managed to create the huge robots / exoskeletons so favored in the anime genre, all the generals in all the opposing armies would stand up and clap wildly for them. Because these robots are practically the worst war-machines ever. And believe it or not, I know that because we conducted actual research into this area, together with Dr. Aharon Hauptman and Dr. Liran Antebi.

But before I tell you about that research, let me say a few words about the woes of huge humanoid robots.

First, there are already some highly sophisticated exoskeleton suits developed by major military contractors – Raytheon’s XOS2 and Lockheed Martin’s HULC, for example. While they’re definitely the coolest thing since sliced bread and frosted donuts, they have one huge disadvantage: they need plenty of energy to work. As long as you can connect them to a power line, it shouldn’t be too much of an issue. But once you ask them to go out to the battlefield… well, after one hour at most they’ll stop working, and quite likely trap the human operating them.

Some companies, like Boston Dynamics, have tried to overcome the energy challenge by adding a diesel engine to their robots. Which is great, except for the fact that it’s still pretty cumbersome, and extremely noisy. Not much use for robots that are supposed to accompany marines on stealth missions.

Robots: Left – Raytheon’s XOS2 exoskeleton suit; Upper right – Lockheed Martin’s HULC; Bottom right – Boston Dynamics’ Alpha Dog.

 

But who wants stealthy robots, anyway? We’re talking about gargantuan robots, right?!

Well, here’s the thing: the larger and heavier the robot is, the more energy you need to operate it. That means you can’t really add much armor to it. And the larger you make it, the more unwieldy it becomes. There’s a reason elephants are so sturdy, with thick legs – that’s the only way they can support their enormous body weight. Huge robots, which are much heavier than elephants, can’t even have legs with joints. When the Mk. II Mech was unveiled at Maker Faire 2015, it reached a height of 15 feet, weighed around 6 tons… and could only move by crawling on a caterpillar track. So, in short, it was a tank.
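The elephant point is essentially the square-cube law. A back-of-the-envelope sketch (my own illustration, not something from the research mentioned later in this post): scale every linear dimension of a walking machine by a factor k, and

```latex
\[
  m \propto k^{3} \ \text{(mass grows with volume)}, \qquad
  A \propto k^{2} \ \text{(leg and actuator cross-sections grow with area)}
\]
\[
  \text{stress on the legs} \;\propto\; \frac{m g}{A} \;\propto\; \frac{k^{3}}{k^{2}} \;=\; k
\]
```

So a machine ten times taller than a human loads comparable joints roughly ten times harder relative to what they can bear, which is why giant walkers end up with stubby legs – or with tracks instead of legs.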

And don’t even think about it rising to the air. Seriously. Just don’t.

MegaBots’ Mk. II Mech, complete with the quintessential sexy pilot.

But let’s say you manage to somehow bypass all of those pesky energy constraints. Even in that case, huge humanoid robots would not be a good idea because of two main reasons: shape, and size.

Let’s start with shape. The human body evolved the way it did – limbs, groin, hair and all – to cope with the hardships of life on the one hand, while also being able to have sex, give birth and generally do fun stuff. But robots aren’t supposed to be doing fun stuff. Unless, that is, you want to build a huge Japanese humanoid sex robot. And yes, I know that sounds perfectly logical for some horribly unfathomable reason, but that’s not what the question is about.

So – if you want a battle-robot, you just don’t need things like legs, a groin, or even a head with a vulnerable computer-brain. You don’t need a huge multifunctional battle-robot. Instead, you want small and efficient robots that are uniquely suited to the task set for them. If you want to drop bombs, use a bomber drone. If you want to kill someone, use a simple robot with a gun. Heck, it can look like a child’s toy, or like a ball, but what does it matter? It just needs to get the job done!

You don’t need a gargantuan Japanese robot for battle. You can even use robots as small as General Robotics’ Dogo: basically a small tank the size of your foot that carries a Glock pistol and can use it efficiently.

Last but not least, large humanoid robots are not only inefficient, cumbersome and impractical, but are also extremely vulnerable to being hit. One solid hit to the head will take them out. Or to a leg. Or the torso. Or the groin of that gargantuan Japanese sex-bot that’s still wondering why it was sent to a battlefield where real tanks are doing all the work. That’s why armies around the world are trying to figure out how to use swarms of drones instead of deploying one large robot: if one drone takes the hit, the rest of the swarm still survives.

So now that I’ve thrown cold water on the idea of large Japanese humanoid robots, here’s the final rub. A few years ago I was part of a research project, along with Dr. Aharon Hauptman and Dr. Liran Antebi, that was meant to assess the capabilities robots will possess in the next twenty years. I’ll cut straight to the chase: the experts we interviewed and surveyed believed that in twenty years or less we’ll have –

  • Robots with perfect camouflage capabilities in visible light (essentially invisibility);
  • Robots that can heal themselves, or use objects from the environment as replacement parts;
  • Biological robots.

One of the few categories about which the experts were skeptical was that of “transforming platforms” – i.e. robots that can change shape to adapt themselves to different tasks. There is just no need for these highly versatile (and expensive, inefficient and vulnerable) robots when you can send ten highly specialized robots to perform each task in turn. Large humanoid robots are the same: there’s just no need for them in warfare.

So, to sum things up: if Japan were to construct anime-style Gundam-like robots and send them to war, I really hope they prepare them for having sex, because they would be screwed over pretty horribly.