The Little Military Drone that Could

We hear all around us about the major breakthroughs that await just around the bend: of miraculous cures for cancer, of amazing feats of genetic engineering, of robots that will soon take over the job market. And yet, underneath all the hubbub, there lurk the little stories – the occasional bizarre occurrences that hint at the kind of world we’re heading into. One of those recent tales happened at the beginning of this year, and it offers a few clues about the future. I call it – The Tale of the Little Drone that Could.

Our story begins towards the end of January 2017, when said little drone was launched in Southern Arizona as part of a simple exercise. The drone was part of the Shadow RQ-7Bv2 series, but we’ll just call it Shady from now on. Drones like Shady are usually used for surveillance by the US Army, and should not stray more than 77 miles (about 125 km) away from their ground-based control station. But Shady had other plans in the mind it didn’t have: as soon as it was launched, all communication between the drone and the control station was lost.

Shady the drone. Source: Department of Defense

Other, more primitive drones would probably have crashed at around this stage, but Shady was a special drone indeed. You see, Shadow drones enjoy a high level of autonomy. In simpler terms, they can stay in the air and keep performing their mission even if they lose their connection with the operator. The only issue was that Shady didn’t know what its mission was. And as the confused operators on the ground realized at that moment – nobody really had any idea what it was about to do.

Autonomous aerial vehicles are usually programmed to perform certain tasks when they lose communication with their operators. Emergency systems kick in as soon as the drone realizes that it’s all alone, up there in the sky. Some drones circle above a certain point until the radio connection is reestablished. Others attempt to land immediately, or try to return to the point from which they were launched. This, at least, is what the emergency systems are supposed to do. Except that in Shady’s case, a malfunction occurred, and they didn’t.
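
To make that concrete, here is a minimal sketch of the kind of lost-link logic such drones typically carry. The names, thresholds and structure are illustrative only – this is not the Shadow’s actual flight software.

```python
# A minimal sketch of lost-link failsafe logic, of the kind described above
# (loiter, then return to launch, then land). The names, thresholds and
# structure are illustrative only - NOT the Shadow's actual software.

LINK_TIMEOUT_S = 30          # seconds of radio silence before declaring lost link
LOITER_BEFORE_RTL_S = 600    # circle this long, hoping the link comes back

def failsafe_action(seconds_since_last_packet, seconds_in_failsafe, launch_point):
    """Decide what the drone should do when it can't hear its operators."""
    if seconds_since_last_packet < LINK_TIMEOUT_S:
        return ("CONTINUE_MISSION", None)            # link is fine
    if seconds_in_failsafe < LOITER_BEFORE_RTL_S:
        return ("LOITER_AT_CURRENT_POSITION", None)  # circle and wait for radio
    if launch_point is not None:
        return ("RETURN_TO_LAUNCH", launch_point)    # fly back to where it started
    return ("LAND_IMMEDIATELY", None)                # last resort: get on the ground

# Shady's bug would live somewhere in here: perhaps a wrong launch_point
# (an old home base?), or an emergency branch that simply never fired.
```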

Or maybe they did.

Some believe that Shady’s memory accidentally contained the coordinates of its former home at a military base in Washington state, and that it valiantly attempted to come back home. Or maybe it didn’t. These are, obviously, just speculations. It’s entirely possible that the emergency systems simply failed to jump into action, and Shady just kept sailing up in the sky, flying towards the unknown.

Be that as it may, our brave (at least in the sense that it felt no fear) little drone left its frustrated operators behind and headed north. It rode the strong winds of that day, and sailed over forests and Native American reservations. Throughout its flight, the authorities kept track of the drone by radar, but after five hours it reached the Rocky Mountains. It should not have been able to pass them, and since the military lost track of its radar signature at that point, everyone just assumed Shady had crashed.

But it didn’t.

Instead, Shady rose higher up in the air, to a height of 12,000 feet (about 3,700 meters), and glided up and over the Rocky Mountains, in environmental conditions it was not designed for and over distances it was never meant to cover. Nonetheless, it kept on buzzing north, undeterred, on a 632-mile journey, until it crashed near Denver. We don’t know the reason for the crash yet, but it’s likely that Shady simply ran out of fuel at about that point.

The Rocky Mountains. Shady crossed them too.

And that is the tale of Shady, the little drone that never thought it could – mainly since it doesn’t have any thinking capabilities at all – but went the distance anyway.

 

What Does It All Mean?

Shady is just one autonomous robot out of many. Autonomous robots, even limited ones, can perform certain tasks with minimal involvement by a human operator. Shady’s tale is simply the result of a bug in the robot’s operating system. There’s nothing strange in that by itself, since we discover bugs in practically every program we use: the Word program I’m using to write this post occasionally (though rarely, fortunately) gets stuck, or even starts deleting letters and words by itself, for example. These bugs are annoying, but we realize that they’re practically inevitable in programs as complex as the ones we use today.

Well, Shady had a bug as well. The only difference between Word and Shady is that the latter is a military drone worth $1.5 million, and the bug caused it to cross three states and the Rocky Mountains with no human supervision. It can be safely said that we’re all lucky that Shady is normally used only for surveillance, and is thus unarmed. But Shady’s less innocent cousin, the Predator drone, is also used to attack military targets on the ground, and is thus equipped with two Hellfire anti-tank missiles and six Griffin air-to-surface missiles.

A Predator drone firing away.

I rather suspect that we would be less amused by this episode if one of the armed Predators were to take Shady’s place and sail across America with nobody knowing where it was going, or what it was planning to do once it got there.

 

Robots and Urges

I’m sure that the emotionally laden story at the beginning of this post has made some of you laugh, and for a very good reason. Robots have no will of their own. They have no thoughts or self-consciousness. The sophisticated autonomous robots of the present, though, do exhibit “urges”. Programmers build certain urges into the robots, which are activated in pre-defined ways.

In many ways, autonomous robots resemble insects. Both are conditioned – by programming or by the structure of their simple neural systems – to act in certain ways, in certain situations. From that viewpoint, insects and autonomous robots both have urges. And while insects are quite complex organisms, they have bugs as well – which is the reason that mosquitoes keep flying into the light of electric traps at night. Their simple urges are incapable of dealing with the new demands placed on them by the modern environment. And if insects can experience bugs in unexpected environments, how much more so for autonomous robots?
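
In software terms, an “urge” is little more than a condition-action rule. Here’s a toy sketch, entirely my own illustration, of how such pre-defined urges might be wired into a simple robot – and how a perfectly sensible rule can misfire in an environment it wasn’t written for, just like the moth and the lamp.

```python
# A toy illustration of "urges" as condition-action rules. Everything here is
# hypothetical - real robot behavior stacks are far more elaborate.

def urge_avoid_cliff(sensors):
    if sensors["floor_distance"] > 0.1:      # meters - an edge was detected
        return ("reverse", None)
    return None

def urge_seek_light(sensors):
    # A moth-like urge: steer toward the brightest reading.
    brightest = max(sensors["light"], key=sensors["light"].get)
    return ("steer_toward", brightest)

URGES = [urge_avoid_cliff, urge_seek_light]  # earlier urges take priority

def decide(sensors):
    for urge in URGES:
        action = urge(sensors)
        if action is not None:
            return action
    return ("idle", None)

# In an environment the rules were never written for (an electric lamp instead
# of the moon), the very same urges produce "bugs":
print(decide({"light": {"lamp": 0.9, "window": 0.4}, "floor_distance": 0.02}))
# -> ('steer_toward', 'lamp')
```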

Shady’s tale shows what happens when a robot obeys the wrong kind of urges. Such bugs are inevitable in any complex system, but their impact could be disastrous when they occur in autonomous robots – especially of the armed variety that can be found on the battlefield.

 

Scared? Take Action!

If this revelation scares you as well, you may want to sign the open letter that the Future of Life Institute released around a year and a half ago, against the use of autonomous weapons in war. You won’t be alone out there: more than a thousand AI researchers have already signed that letter.

Will governments be deterred from employing autonomous robots in war? I highly doubt that. We failed to stop even the potentially world-shattering proliferation of nuclear weapons, so putting a halt to robotic proliferation doesn’t seem likely. But at least when the next Shady or Freddy the Predator gets lost, you’ll be able to shake your head in disappointment and mention that you just knew this would happen, that you warned everyone in advance, and nobody listened to you.

And when that happens, you’ll finally know what being a futurist feels like.

 

 

 

Should We Actually Use Huge Japanese Robots in Warfare?

OK, so I know the headline of this post isn’t really the sort of question a stable and serious scientist, or even a futurist, should be asking. But I was asked this question on Quora, and thought it warranted some thought. So here’s my answer to the mystery that has hounded movie directors for the last century or so!

If Japan actually managed to create the huge robots / exoskeletons so favored in the anime genre, all the generals in all the opposing armies would stand up and clap wildly for them. Because these robots are practically the worst war-machines ever. And believe it or not, I know that because we conducted actual research into this area, together with Dr. Aharon Hauptman and Dr. Liran Antebi.

But before I tell you about that research, let me say a few words about the woes of huge humanoid robots.

First, there are already some highly sophisticated exoskeleton suits, like Raytheon’s XOS2 and Lockheed Martin’s HULC, developed by major military contractors. While they’re definitely the coolest thing since sliced bread and frosted donuts, they have one huge disadvantage: they need plenty of energy to work. As long as you can connect them to a powerline, it shouldn’t be too much of an issue. But once you ask them to go out to the battlefield… well, after one hour at most they’ll stop working, and quite likely trap the human operating them.

Some companies, like Boston Dynamics, have tried to overcome the energy challenge by adding a diesel engine to their robots. Which is great, except that diesel engines are still pretty cumbersome, and extremely noisy. Not much use for robots that are supposed to accompany marines on stealth missions.

Robots: Left – Raytheon’s XOS2 exoskeleton suit; Upper right – Lockheed Martin’s HULC; Bottom right – Boston Dynamics’ Alpha Dog.

 

But who wants stealthy robots, anyway? We’re talking about gargantuan robots, right?!

Well, here’s the thing: the larger and heavier the robot is, the more energy you need to operate it. That means you can’t really add much armor to it. And the larger you make it, the more unwieldy it becomes. There’s a reason elephants are so sturdy, with thick legs – that’s the only way they can support their enormous body weight. Huge robots, which are much heavier than elephants, can’t even have legs with joints. When the MK. II Mech was unveiled at Maker Faire 2015, it reached a height of 15 feet, weighed around 6 tons… and could only move by crawling on a caterpillar track. So, in short, it was a tank.
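
The post doesn’t name it, but the elephant argument is essentially the square-cube law. A quick way to see it (my framing, not the original’s):

```latex
% Scale every linear dimension of a machine by a factor k:
% its mass grows with volume, its leg strength only with cross-section.
\[
  m \propto L^{3}, \qquad A_{\text{legs}} \propto L^{2}
  \quad\Longrightarrow\quad
  \frac{\text{load}}{\text{strength}} \propto \frac{L^{3}}{L^{2}} = L
\]
% A robot scaled up tenfold carries roughly ten times more weight per unit of
% leg strength - which is why giant legged machines end up on tracks.
```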

And don’t even think about it rising to the air. Seriously. Just don’t.

MegaBots’ MK. II Mech, complete with the quintessential sexy pilot.

But let’s say you manage to somehow bypass all of those pesky energy constraints. Even then, huge humanoid robots would not be a good idea, for two main reasons: shape and size.

Let’s start with shape. The human body evolved the way it did – limbs, groin, hair and all – to cope with the hardships of life on the one hand, while also being able to have sex, give birth and generally do fun stuff. But robots aren’t supposed to be doing fun stuff. Unless, that is, you want to build a huge Japanese humanoid sex robot. And yes, I know that sounds perfectly logical for some horribly unfathomable reason, but that’s not what the question is about.

So – if you want a battle-robot, you just don’t need things like legs, a groin, or even a head with a vulnerable computer-brain. You don’t need a huge multifunctional battle-robot. Instead, you want small and efficient robots that are uniquely suited to the task set for them. If you want to drop bombs, use a bomber drone. If you want to kill someone, use a simple robot with a gun. Heck, it can look like a child’s toy, or like a ball, but what does it matter? It just needs to get the job done!

You don’t need a gargantuan Japanese robot for battle. You can even use robots as small as General Robotics’ Dogo: basically a small tank the size of your foot that carries a Glock pistol and can use it efficiently.

Last but not least, large humanoid robots are not only inefficient, cumbersome and impractical, but are also extremely vulnerable to being hit. One solid hit to the head will take them out. Or to a leg. Or the torso. Or the groin of that gargantuan Japanese sex-bot that’s still wondering why it was sent to a battlefield where real tanks are doing all the work. That’s why armies around the world are trying to figure out how to use swarms of drones instead of deploying one large robot: if one drone takes the hit, the rest of the swarm still survives.

So now that I’ve thrown cold water on the idea of large Japanese humanoid robots, here’s the final rub. A few years ago I was part of a research project, along with Dr. Aharon Hauptman and Dr. Liran Antebi, that was meant to assess the capabilities robots will possess in the next twenty years. I’ll cut straight to the chase: the experts we interviewed and surveyed believed that in twenty years or less we’ll have –

  • Robots with perfect camouflage capabilities in visible light (essentially invisibility);
  • Robots that can heal themselves, or use objects from the environment as replacement parts;
  • Biological robots.

One of the only categories about which the experts were skeptical was that of “transforming platforms” – i.e. robots that can change shape to adapt themselves to different tasks. There is just no need for these highly versatile (and expensive, inefficient and vulnerable) robots, when you can send ten highly specialized robots to perform each task in turn. Large humanoid robots are the same. There’s just no need for them in warfare.

So, to sum things up: if Japan were to construct anime-style Gundam-like robots and send them to war, I really hope they prepare them for having sex, because they would be screwed over pretty horribly.

Garbage, Trash, and the Future of Jobs

“Hey, wake up! You’ve got to see something amazing!” I gently wake up my four-year-old son.

He opens his eyes and mouth in a yawn. “Is it Transformers?” he asks hopefully.

“Even better!” I promise him. “Come outside to the porch with me and you’ll see for yourself!”

He dashes outside with me. Out in the street, Providence’s garbage truck is taking care of the trash bins in a completely robotic fashion. Here’s the evidence I shot, so you can see for yourself –

 

The kid glares at me. “That’s not a Transformer,” he says.

“It’s a vehicle with a robotic arm that grabs the trash bins, lifts them up in the air and empties them into the truck,” I argue. “And then it even returns the bins to their proper place. And you really should take note of this, kiddo, because every detail in this scene provides hints about the way you’ll work in the future, and what the job market will look like.”

“What’s a job?” he asks.

I choose to ignore that. “Here are the most important points. First, routine tasks become automated. Routine tasks are those that need to be repeated without too much variation in between, and can therefore be easily handled by machines. In fact, that’s what the industrial revolution was all about – machines doing menial human labor more efficiently than human workers, and on a massive scale. But in the last few decades, machines have shown themselves capable of taking on more and more routine tasks. And very soon we’ll see tasks that were considered non-routine in the past, like driving a car, being relegated to robots. So if you want to have a job in the future, try to find something that isn’t routine – a job that requires mental agility and finding solutions to new challenges every day.”

He rubs his eyes pointedly, but I’m on a roll now.

“Second, we’ll still need workers, but not as many. Science fiction authors love writing about a future in which nobody will ever need to work, and robots will serve us all. Maybe this future will come to pass, but on the way there we’ll still need human workers to bridge the gap between ancient and novel systems. In the garbage truck, for example, the robotic arm replaces two or three workers, but we still need the driver to pilot the vehicle – which is ancient technology – and to deal with unexpected scenarios. Even when the vehicle becomes completely autonomous and no longer needs a driver, a few workers will still be needed on alert: they’ll be called to places where the truck has malfunctioned, or where the AI has identified a situation it’s incapable of handling or unauthorized to handle. So there will still be human workers, just not as many as we have today.”

He opens his mouth for a yawn again, but I cut him short. “Never show them you’re tired! Which brings me to the third point: in the future, we’ll need fewer workers – but of higher caliber. Each worker will carry a large burden on his or her shoulders. Take this driver, for example: he needs to stop at the exact spot in front of every bin, operate the robotic arm and make sure nothing gets messy. In the past, drivers didn’t need to carry all that responsibility, because the garbage workers who rode on the back of the truck did most of the work. The modern driver also had to learn to operate the new vehicle with the robotic arm, so it’s clear that he is learning and adapting to new technologies. These are skills that you’ll need to learn and acquire for yourself. And when will you learn them?!”

“In the future,” he recites by rote in a toneless voice. “Can I go back to sleep now?”

“Never,” I promise him. “You have to get upgraded – or be left behind. Take a look at those two bins on the pavement. The robotic arm can only pick up one of them – the one that comes in the right size. The other bin is left unattended, and has to wait until the primitive human can come and take care of it. In other words, only the upgraded bin receives the efficient and rapid treatment of the garbage truck. So unless you want to be left way behind like that other trash bin, you have to prepare for the future and move along with it – or everyone else will leap ahead of you.”

He nods with drooping lids, and yawns again. I allow him to complete this yawn, at least.

“OK, daddy,” he says. “Now can I go back to bed?”

I stare at him for a few more moments, while my mind returns from the future to the present.

“Yes,” I smile sadly at him. “Go back to bed. The future will wait patiently for you to grow up.”

My gaze follows him as he goes back to his room, and the smile melts from my lips. He’s still just four years old, and will learn all the skills he needs to handle the future world as he grows up.

For him, the future will wait patiently.

For others – like those unneeded garbage workers – it’s already here.

 

Don’t Tell Me Not To Make Love with My Robot

Pepper is one of the most sophisticated household robots in existence today. Its body shape is reminiscent of a prepubescent child: it stands only 120 centimeters tall and has a tablet on its chest. It constantly analyzes its owner’s emotions according to their speech, facial expressions and gestures, and responds accordingly. It also learns – for example, by analyzing which modes of behavior it can enact to make its owner feel better. It can even use its hands to hug people.

Pepper the Robot. Source: The Monitor Daily.

No wonder that when the first 1,000 Pepper units were offered for sale in Japan for $1,600, they were all sold in one minute. Pepper is now the most famous household robot in the world.

Pepper is probably also the only robot you’re not allowed to have sex with.

According to the contract, written in Japanese legalese and translated into English, users are not allowed to perform –

“(4) Acts for the purpose of sexual or indecent behavior, or for the purpose of associating with unacquainted persons of the opposite sex.”

What does this development mean? Here is the summary, in just three short points.

 

First Point: Is Pepper Being Used for Surveillance?

First, one has to wonder just how SoftBank, the robot’s distributor in Japan, is going to keep tabs on whether the robot has been used sexually or not. Since Pepper’s price includes a $200 monthly “data and insurance fee”, it’s a safe bet that every Pepper unit is transmitting some of its data back to SoftBank’s servers. That’s not necessarily a bad thing: as I’ve written in Four Robot Myths it’s Time We Let Go of, robots can no longer be seen as individual units. Instead, they are a form of hive brain, relying on each other’s experience and insights to guide their behavior. In order to do that, they must be connected to the cloud.

This is obviously a form of surveillance. Pepper is sophisticated enough to analyze its owner’s emotions and responses, and can thus deliver a plethora of information to SoftBank, advertisers and even government authorities. The owners could probably activate a privacy mode (if there isn’t a privacy mode now, it will almost certainly be added in the near future by popular demand), but the rest of the time their behavior will be under close scrutiny. Not necessarily because SoftBank is actually interested in what you’re doing in your home, but simply because it wants to improve the robots.

And, well, also because it may not want you to have sex with them.

This is where things get bizarre. It is almost certainly the case that if SoftBank wished to, it could set up a sex alarm to go off autonomously if Pepper is repeatedly exposed to sexual acts. There doesn’t even have to be a human in the loop – just train the AI engine behind Pepper on a large enough number of porn and erotic movies, and pretty soon the robot will be able to tell by itself just what the owner is dangling in front of its cameras.

The rest of the tale is obvious: the robot will complain to SoftBank via the cloud, but will do so without sharing any pictures or videos it has taken. In other words, it won’t share raw information, only its insights and understanding of what’s been going on in that house. SoftBank might issue a soft warning to the owner, asking them to behave more coyly around Pepper. If such chastity alerts keep coming up, though, SoftBank might have to retrieve Pepper from that house. And almost certainly, it will not allow other Pepper units to learn from the one that has been exposed to sexual acts.

And here’s the rub: if SoftBank wants to keep on developing its robots, they must learn from each other, and thus they must be connected to the cloud. But as long as SoftBank doesn’t want them to learn how to engage in sexual acts, it will have to set up some kind of filter – meaning that the robots will have to learn to recognize sexual acts, and refuse to talk about them with other robots. And silence, in the case of an always-operational robot, is as good as any testimony.
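
To make the argument concrete, here is a minimal sketch of what such a filter might look like. Everything in it – the categories, the function names, the very existence of this mechanism – is my own speculation, not SoftBank’s actual architecture.

```python
# Speculative sketch of an "insight filter" between a household robot and the
# cloud. Not SoftBank's real system; it only illustrates the argument above:
# to refuse to share a category, the robot must first be able to recognize it,
# and the resulting silence is itself a signal.

BLOCKED_CATEGORIES = {"sexual_act"}   # hypothetical policy

def classify_insight(insight):
    # Stand-in for an onboard classifier trained to recognize such scenes.
    return insight.get("category", "unknown")

def upload_insights(insights, cloud):
    shared, withheld = [], 0
    for insight in insights:
        if classify_insight(insight) in BLOCKED_CATEGORIES:
            withheld += 1          # don't share it - but we had to detect it
        else:
            shared.append(insight)
    cloud.append({"shared": shared, "withheld_count": withheld})
    # Even without images or video, "withheld_count > 0" tells the operator
    # exactly what the robot claims not to know.

cloud_log = []
upload_insights(
    [{"category": "owner_smiled"}, {"category": "sexual_act"}],
    cloud_log,
)
print(cloud_log)
```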

So yes, SoftBank will know when you’re having sex with Pepper.

I’ve written extensively in the past about how the meaning of private property is changing as everything gets connected to the cloud. Tesla sells you a car, but still controls some parts of it. Google sells you devices for controlling your smart house – which it can then (and does) shut down from a distance. And yes, SoftBank is selling you a robot which becomes your private property – as long as you don’t do anything with it that SoftBank doesn’t want you to.

And that was only the first point.

 

Second Point: Is Sex the Answer, or the Question?

There’s been some public outrage recently about sex with robots, with an actual campaign against using robots as sex objects. I sent the leaders of the campaign, Kathleen Richardson and Erik Brilling, several questions to understand the nature of their issues with the robots. They have not answered my questions, but according to their campaign website it seems that they equate ‘robot prostitution’ with human prostitution.

“But robots don’t feel anything.” You might say now. “They don’t have feelings, or dignity of their own. Do they?”

Let’s set things straight: sexual abuse is among the most horrible things any human can do to another. The abuser causes both temporary and permanent injury to the victim’s body and mind. That’s why we call it abuse. But if there are no laws to protect a robot’s body, and no mind to speak of, why should we care whether someone uses a robot in a sexual way?

Richardson and Brilling basically claim that it doesn’t matter whether the robots are actually experiencing the joys of coitus or suffering the ignominy of prostitution. The mere fact that people will use robots in the shape of children or women for sexual release will serve to perpetuate our current societal model, in which women and children are sexually abused.

Let’s approach the issue from another point of view, though. Could sex with robots actually prevent some cases of sexual abuse?

… Or lovers? Source: Redsilverj

Assuming that robots can provide a high-quality sexual experience to human beings, it seems reasonable that some pent-up sexual tensions could be relieved using sex robots. There are arguments that porn might actually deter sexual violence, and while the debate is nowhere near a conclusion on that point, it’s interesting to ask: if robots can actually relieve human sexual tensions, and thus deter sexual violence against other human beings – should we allow that to happen, even though it objectifies robots, and, by association, women and children as well?

I would wait for more data to come in on this subject before actually advocating for sex with robots, but in the meantime we should probably refrain from passing judgement on people who have sex with robots. Who knows? It might actually serve a useful purpose in the near future. Which brings me to the third point –

 

Third Point: Don’t You Tell Me Not to have Sex with MY Robot

And really, that’s all there is to it.

 

Robit: A New Contender in the Field of House Robots

The field of house robots has been abuzz for the last two years. It began with Jibo – the first cheap house robot, originally advertised on Indiegogo, which gathered nearly $4 million. Jibo doesn’t look at all like Asimov’s vision of humanoid robots. Instead, it resembles a small cartoon-like version of Eve from the Wall-E movie. Jibo can understand voice commands, recognize and track faces, and even take pictures of family members and speak and interact with them. It can do all that for just $750 – which seems like a reasonable deal for a house robot. Romo is another house robot, for just $150 or so, with a cute face and a quirky attitude, which sadly went out of production last year.

 

Pictures of house robots: Pepper (~$1,600), Jibo (~$750), Romo (~$130). Image on the right originally from That’s Really Possible.

 

Now comes a new contender in the field of house robots: Robit, “The Robot That Gets Things Done”. It moves around the house on its three wheels, wakes you up in the morning, looks after lost items like your shoes or keys on the floor, detects smoke and room temperature, and even delivers beer for you on a tray. And it does all that for just $349 on Indiegogo.


I interviewed Shlomo Schwarcz, co-founder & CEO at Robit Robot, about Robit and the present and future of house robots. Schwarcz emphasized that unlike Jibo, Robit is not supposed to be a ‘social robot’. You’re not supposed to talk with it or have a meaningful relationship with it. Instead, it is your personal servant around the house.

“You choose the app (guard the house, watch your pet, play a game, dance, track objects, find your lost keys, etc.) and Robit does it. We believe people want a Robit that can perform useful things around the house rather than just chat.”

It’s an interesting choice, and it seems that other aspects of Robit conform to it. While Jibo and Romo are pleasant to look at, Robit’s appearance can be somewhat frightening, with a head that resembles that of a human baby. The question is, can Robit actually do everything promised in the campaign? Schwarcz mentions that Robit is essentially a mobile platform that runs apps, and the developers have created apps that cover the common and basic usages: remote control from a smartphone, movement and face detection, dance, and a “find my things” app.

Other, more sophisticated apps will probably be left to third-party developers. These could include Robit analyzing foodstuff and determining its nutritional value, launching toy missiles at items around the house using a tiny missile launcher, and keeping watch over your cat so that it doesn’t climb on that precious sofa that used to belong to your mother-in-law. These are all great ideas, but they still need to be developed by third parties.

This is where Robit both wins and fails at the same time. The developers realized that no robotic device in the near future is going to be a standalone achievement. They are all going to be connected together, learn from each other and share insights by means of a virtual app market that can be updated every second. Used that way, robots everywhere can evolve much more rapidly. As Schwarcz says –

“…Our vision [is] that people will help train robots and robots will teach each other! Assuming all Robits are connected to the cloud, one person can teach a Robit to identify, say a can and this information can be shared in the cloud and other Robits can download it and become smarter. We call these bits of data “insights”. An insight can be identifying something, understanding a situation, a proper response to an event or even just an eye and face expression. Robots can teach each other, people will vote for insights and in short time they will simply turn themselves to become more and more intelligent.”
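
In code, the mechanism Schwarcz describes might look something like this toy sketch – one robot uploads an “insight”, and the rest pull it down. The names and data structures are my own illustration, not Robit’s actual system.

```python
# Toy sketch of robots pooling what they learn through a shared server.
# Hypothetical names and structure - just to make "teaching each other" concrete.

SHARED_KNOWLEDGE = {}   # stands in for the cloud: label -> recognizer data

def upload_insight(label, recognizer_data):
    """One robot teaches the fleet: publish what it learned about `label`."""
    SHARED_KNOWLEDGE.setdefault(label, []).append(recognizer_data)

def sync(robot_memory):
    """Every robot pulls the pooled knowledge before its next task."""
    robot_memory.update(SHARED_KNOWLEDGE)
    return robot_memory

# Robot A learns to recognize a soda can; robots B and C get it "for free".
upload_insight("soda_can", {"features": [0.12, 0.87, 0.45]})
robot_b = sync({})
robot_c = sync({})
print("soda_can" in robot_b and "soda_can" in robot_c)   # True
```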

That’s an important vision for the future, and one that I fully agree with. The only problem is that it requires the creation of an app market for a device that is not yet out there on the market and in people’s houses. The iPhone app store was an overnight success because the device reached the hands of millions in its first year of existence, and probably also because it was an organic continuation of the iTunes brand. At the moment, though, there is no similar app management system for robots, and certainly not enough robots out there to justify the creation of such a system.

At the moment, the Robit crowdfunding campaign is progressing slowly. I hope that Robit makes it through, since it’s an innovative idea for a house robot, and definitely has potential. Whether it succeeds or fails, the campaign mainly shows that the house robot concept is one that innovators worldwide are rapidly becoming attached to, and are trying to find the best ways to implement. Twenty years from now, we’ll laugh about all the wacky ideas these innovators had, but the best of those ideas – those that survived the test of time and the market – will serve us in our houses. Seen from that perspective, Schwarcz is one of those countless unsung heroes: the ones who try to make a change in a market that nobody understands, and dare greatly.

Will he succeed? That’s for the future to decide.

 

 

Images of Israeli War Machines from 2048

Do you want to know what war will look like in 2048? The Israeli artist Pavel Postovit has drawn a series of remarkable images depicting soldiers, robots and mechs – all in the service of the Israeli army in 2048. He even drew aerial ships resembling the infamous Helicarrier from The Avengers (which had an unfortunate tendency to crash every second week or so).

Pavel is not the first artist to make an attempt to envision the future of war. Jakub Rozalski before him tried to reimagine World War II with robots, and Simon Stalenhag has many drawings that demonstrate what warfare could look like in the future. Their drawings, obviously, are a way to forecast possible futures and bring them to our attention.

Pavel’s drawings may not be based on rigorous foresight research, but they don’t have to be. They are mainly focused on showing us one way the future may unfold. Pavel himself does not pretend to be a futures researcher, and told me that –

“I was influenced by all kind of different things – Elysium, District 9 [both are sci-fi movies from the last few years], and from my military service. I was in field intelligence, on the border with Syria, and was constantly exposed to all kinds of weapons, both ours and the Syrians.”

Here are a few of the drawings, divided into categories I added, to help you understand Pavel’s vision of the future. Be aware that the last picture is the most haunting of all.

 

Mechs in the Battlefield

Mechs are a form of ground vehicle with legs – much like Boston Dynamics’ Alpha Dog, which they are presumably based on. The most innovative of these mechs is the DreamCatcher – a unit with arms and hands that is used to collect “biological intelligence in hostile territory”. In one particularly disturbing image we can see why it’s called “DreamCatcher”, as the mech beheads a deceased human fighter and takes the head for inspection.


Apparently, mechs in Pavel’s future operate almost autonomously – they can reach hostile areas on the battlefield and carry out complicated tasks on their own.

 

Soldiers and Aerial Drones

Soldiers in the field will be accompanied by aerial drones. Some of the drones will be larger than others – the Tinkerbell, for example, can serve both for recon and as personal CAS (Close Air Support) for the individual soldier.


Other aerial drones will be much smaller, and will be deployed as a swarm. The Blackmoth, for example, is a swarm of stealthy micro-UAVs used to gather tactical intelligence on the battlefield.


 

Technology vs. Simplicity

Throughout Pavel’s visions of the future we can see a repeated pattern: the technological prowess of the West colliding with the simple lifestyle of natives. Since the images depict the Israeli army, it’s obvious why the machines are essentially fighting or constraining the Palestinians. You can see in the images below what life might look like in 2048 for Arab civilians and combatants.


Another interesting picture shows Arab combatants dealing with a heavily armed combat mech by trying to make it lose its balance. At the same time, one of the combatants is sitting to the side with a laptop – presumably trying to hack into the robot.


 

The Last Image

If the images above have made you feel somewhat shaken, don’t worry – it’s perfectly normal. You’re seeing here a new kind of warfare, in which robots take an extremely active part against human beings. That’s war for you: brutal and horrible, and there’s not much to be done about that. If robots can actually minimize the amount of suffering on the battlefield by replacing soldiers, and by carrying out tasks with minimal casualties on both sides – it might actually be better than the human-based model of war.

Perhaps that is why I find the last picture the most horrendous of all. In it you can see a combatant, presumably an Arab, with a bloody machete next to him and two prisoners that he’s holding in a cage. The combatant is reading a James Bond book. The symbolism is clear: this is the new kind of terrorist / combatant. He is vicious, ruthless, and well-educated in Western culture – at least well enough to develop his own ideas for using technology to carry out his ideology. In other words, this is an ISIS-style combatant, who begins to employ some of the technologies of the West, like aerial drones, without adhering to the moral theories that restrict their use by nations.


 

Conclusion

The future of warfare in Pavel’s vision is beginning to leave the paradigm of human-on-human action, and is rapidly moving into robotic warfare. It is very difficult to think of a military future that does not include robots, and obviously we should start thinking right now about the consequences, and about how (and whether) we can imbue robots with sufficient autonomous capabilities to carry out missions on their own, while still minimizing casualties on the enemy side.

You can check out the rest of Pavel’s (highly recommended) drawings in THIS LINK.

Four Robot Myths it’s Time We Let Go of

A week ago I lectured in front of an exceedingly intelligent group of young people in Israel – “The President’s Scientists and Inventors of the Future”, as they’re called. I decided to talk about the future of robotics and its uses in society, and as an introduction to the lecture I tried to dispel a few myths about robots that I’ve heard repeatedly from older audiences. Perhaps not so surprisingly, the kids were just as disenchanted with these myths as I was. All the same, I’m writing the four robot myths here, for all the ‘old’ people (20+ years old) who are not as well acquainted with technology as our kids.

As a side note: I lectured in front of the Israeli teenagers about the future of robotics, even though I’m currently residing in the United States. That’s another thing robots are good for!

I’m lecturing as a tele-presence robot to a group of bright youths in Israel, at the Technion.

 

First Myth: Robots must be shaped as Humanoids

Ever since Karel Capek’s first play about robots, the general notion among the public has been that robots have to resemble humans in appearance: two legs, two arms and a head with a brain. Fortunately, most sci-fi authors stop at that point and do not add genitalia as well. The idea that robots have to look just like us is, quite frankly, ridiculous, and stems from an overt appreciation of our own form.

Today, this myth is being dispelled rapidly. Autonomous vehicles – basically robots designed to travel on roads – obviously look nothing like human beings. Even telepresence robot manufacturers have given up on notions of robotic arms and legs, and are producing robots that often look more like a broomstick on wheels. Robotic legs are simply too difficult to operate, too costly in energy, and much too fragile with the materials we have today.

Telepresence robots – no longer shaped like human beings. No arms, no legs, definitely no genitalia. Source: Neurala.

 

Second Myth: Robots have a Computer for a Brain

This myth is interesting in that it’s both true and false. Obviously, robots today are operated by artificial intelligence running on a computer. However, the artificial intelligence itself is vastly different from the simple, rule-based programs we’ve had in the past. The state-of-the-art AI engines are based on artificial neural networks: basically a very simple simulation of a small part of a biological brain.

The big breakthrough with artificial neural networks came about when Andrew Ng and other researchers in the field showed they could use cheap graphics processing units (GPUs) to run sophisticated simulations of artificial neural networks. Suddenly, artificial neural networks appeared everywhere, for a fraction of their previous price. Today, all the major IT companies are using them, including Google, Facebook, Baidu and others.

Although artificial neural networks have been reserved for the IT world in recent years, they are beginning to direct robot activity as well. By employing artificial neural networks, robots can start making sense of their surroundings, and can even be trained for new tasks by watching human beings perform them, instead of being programmed manually. In effect, robots employing this new technology can be thought of as having (exceedingly) rudimentary biological brains, and in the next decade can be expected to reach an intelligence level similar to that of a dog or a chimpanzee. We will be able to train them for new tasks simply by instructing them verbally, or even by showing them what we mean.
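
For readers who have never seen one, here is roughly what “a very simple simulation of a small part of a biological brain” boils down to in code: a tiny network learning XOR with plain NumPy. This is a teaching sketch only – nothing like the large networks running on GPU clusters.

```python
import numpy as np

# A toy artificial neural network: 2 inputs -> 4 hidden neurons -> 1 output,
# trained by backpropagation to learn XOR. Purely a teaching sketch.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)        # hidden layer activations
    out = sigmoid(h @ W2 + b2)      # the network's current guesses
    # Backpropagate the error and nudge every weight a little.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```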

 

This video clip shows how an artificial neural network AI can ‘solve’ new situations and learn from games, until it gets to a point where it’s better than any human player.

 

Admittedly, the companies using artificial neural networks today operate large clusters of GPUs that take up plenty of space and consume plenty of energy. Such clusters cannot be easily placed in a robot’s ‘head’, or wherever its brain is supposed to be. However, this problem is easily solved once the third myth is dispelled.

 

Third Myth: Robots as Individual Units

This is yet another myth that we see very often in sci-fi. The Terminator, Asimov’s robots, R2-D2 – those are all autonomous, individual units, operating by themselves without any connection to the Cloud. Which is hardly surprising, considering there was no information Cloud – or even widely available internet – back in the day when those tales and scripts were written.

Robots in the near future will function much more like a team of ants than as individual units. Any piece of information that one robot acquires and deems important will be uploaded to the main servers, analyzed, and shared with the other robots as needed. Robots will, in effect, learn from each other in a process that will increase their intelligence, experience and knowledge exponentially over time. Indeed, shared learning will accelerate the rate of AI development, since the more robots we have in society – the smarter they will become. And the smarter they become – the more we will want to assimilate them into our daily lives.

Tesla’s cars are a good example of this sort of mutual learning and knowledge sharing. In the words of Elon Musk, Tesla’s CEO –

“The whole Tesla fleet operates as a network. When one car learns something, they all learn it.”

Elon Musk and the Tesla Model X: the cars that learn from each other. Source: AP and Business Insider.

Fourth Myth: Robots can’t make Moral Decisions

In my experience, many people still adhere to this myth, believing that since robots do not have consciousness, they cannot make moral decisions. This is a false equivalence: I can easily program an autonomous vehicle to stop before hitting human beings on the road, even without the vehicle enjoying any kind of consciousness. Moral behavior, in this case, is the product of programming.

Things get complicated when we realize that autonomous vehicles, in particular, will have to make novel moral decisions that no human being was ever required to make in the past. What should an autonomous vehicle do, for example, when it loses control over its brakes and finds itself rushing towards a collision with a man crossing the road? Obviously, it should veer to the side of the road and hit the wall. But what should it do if it calculates that its ‘driver’ will be killed as a result of the collision with the wall? Who is more important in this case? And what happens if two people cross the road instead of one? What if one of those people is a pregnant woman?

These questions demonstrate that it is hardly enough to program an autonomous vehicle for specific encounters. Rather, we need to program into it (or train it to obey) a set of moral rules – heuristics – according to which the robot will interpret any new occurrence and reach a decision accordingly.
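
Here is a crude sketch of what such “a set of moral rules – heuristics” might look like in code. The options, probabilities and weights below are invented for illustration; deciding what they should actually be is precisely the open ethical problem.

```python
# Illustrative only: a toy "moral heuristic" for an autonomous vehicle choosing
# among emergency maneuvers. The options, scores and weights are invented -
# deciding what they *should* be is exactly the hard ethical question.

def expected_harm(option):
    # Weigh predicted casualties; pedestrians and passengers both count.
    return (option["pedestrians_at_risk"] * option["p_pedestrian_harm"]
            + option["passengers_at_risk"] * option["p_passenger_harm"])

def choose_maneuver(options):
    # Heuristic: pick the maneuver with the lowest expected harm,
    # breaking ties in favor of staying on the road (more predictable).
    return min(options, key=lambda o: (expected_harm(o), not o["stays_on_road"]))

options = [
    {"name": "brake_straight", "pedestrians_at_risk": 1, "p_pedestrian_harm": 0.8,
     "passengers_at_risk": 1, "p_passenger_harm": 0.1, "stays_on_road": True},
    {"name": "swerve_into_wall", "pedestrians_at_risk": 0, "p_pedestrian_harm": 0.0,
     "passengers_at_risk": 1, "p_passenger_harm": 0.6, "stays_on_road": False},
]
print(choose_maneuver(options)["name"])  # -> "swerve_into_wall" under these numbers
```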

And so, robots must make moral decisions.

 

Conclusion

As I wrote at the beginning of this post, the youth and the ‘techies’ are already aware of how out-of-date these myths are. Nobody yet knows, though, where the new capabilities of robots will take us when they are combined. What will our society look like when robots are everywhere, sharing their intelligence, learning from everything they see and hear, and making moral decisions not from an individual unit’s perception (as we human beings do), but from an overarching perception spanning insights and data from millions of units at the same time?

This is where we are heading – towards a super-intelligence composed of incredibly sophisticated AI, with robots as its eyes, ears and fingertips. It’s a frightening future, to be sure. How could we possibly control such a super-intelligence?

That’s a topic for a future post. In the meantime, let me know if there are any other myths about robots you think it’s time to ditch!

 

Kitchen of the Future Coming to Your House Soon – Or Only to the Rich?

 

You’re watching MasterChef on TV. The contestants are making their very best dishes and bringing them to the judges for tasting. As the judges’ eyes roll back with pleasure, you are left sitting on your couch with your mouth watering at the praises they heap upon the tasty treats.

Well, it doesn’t have to be that way anymore. Meet Moley, the first robotic cook that might actually reach your household.

Moley is composed mostly of two highly versatile robotic arms that repeat human motions in the kitchen. The arms can basically do anything a human being can, and in fact receive their ‘training’ by recording highly esteemed chefs at work. According to the company behind Moley, the robot will come with more than 2,000 digital recipes installed, and will be able to enact each and every one of them with ease.
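
In principle, the ‘training by recording’ described above is a record-and-replay problem: capture the chef’s arm trajectory as a timed series of poses, then play it back. A highly simplified sketch – the data format and functions are my own invention, not Moley’s actual software:

```python
import time

# A highly simplified record-and-replay sketch, to picture how a chef's motions
# could become a "digital recipe". The data format and functions are invented
# for illustration - this is not Moley's actual software.

def record(read_joint_angles, duration_s, rate_hz=50):
    """Sample the demonstrator's joint angles at a fixed rate."""
    trajectory, dt, start = [], 1.0 / rate_hz, time.monotonic()
    for i in range(int(duration_s * rate_hz)):
        while time.monotonic() - start < i * dt:
            time.sleep(0.001)
        trajectory.append((i * dt, read_joint_angles()))
    return trajectory

def replay(trajectory, send_joint_command):
    """Send the recorded poses back to the robot arms with the same timing."""
    start = time.monotonic()
    for timestamp, angles in trajectory:
        while time.monotonic() - start < timestamp:
            time.sleep(0.001)
        send_joint_command(angles)

# A "digital recipe" is then just stored trajectories plus metadata, e.g.
# {"name": "omelette", "steps": [crack_eggs_trajectory, whisk_trajectory, ...]}
```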

I could go on describing Moley, but a picture is worth a thousand words, and a video clip is worth around thirty thousand words a second. So take a minute of your time to watch Moley in action. You won’t regret it.

 

 

Moley is projected to get to market in 2017, and should cost around $15,000.

What impact could it have for the future? Here are a few thoughts.

 

Impact on Professional Chefs

Moley is not a chef. It is incapable of thinking up new dishes on its own. In fact, it is not much more than a ‘monkey’ replicating every movement of the original chef. This description, however, pretty much applies to 99 percent of kitchen workers in restaurants. They spend their work hours doing exactly as the chef tells them to. As a result, they produce dishes that should be close to identical to one another.

As Moley and similar robotic kitchen assistants come into use, we will see a reduced need for cooks and kitchen workers in many restaurants. This trend will be particularly noticeable in large fast food chains like McDonald’s, which have the funds to install a similar system in every branch, thereby cutting their costs. And the kitchen workers in those places? Most of them will not be needed anymore.

Professional chefs, though, stand to gain a lot from Moley. In a way, food design could become very similar to creating apps for smartphones. Apps are so hugely successful because everybody has an end device – the smartphone – and can download an app immediately for a small cost. Similarly, when many kitchens make use of Moley, professional chefs can make lots of money by selling new and innovative digital recipes for just one dollar each.

 

Sushi for all? That is one app I can’t wait for.

 

Are We Becoming a Plutonomy?

In 2005, Citigroup sent a memo to its wealthiest clients, suggesting that the United States was rapidly turning into a plutonomy: a nation in which the wealthy and the prosperous drive the economy, while everybody else pretty much tags along. In the words of the report –

“There is no such thing as “The U.S. Consumer” or “UK Consumer”, but rich and poor consumers in these countries… The rich are getting richer; they dominate spending. Their trend of getting richer looks unlikely to end anytime soon.”

There is much evidence to support Citigroup’s analysis, and Boston Consulting Group has reached similar conclusions when forecasting the increase in the financial wealth of the super-rich in the near future. In short, it would seem that the rich keep getting richer, whereas the rest of us are not enjoying anywhere near the same pace of financial growth. It is therefore hardly surprising to find that one of the top pieces of advice given by Citigroup in its Plutonomy Memo was basically to invest in companies and firms that provide services to the rich and the wealthy. After all, they’re the ones whose wealth keeps on increasing as time moves on. Why should companies cater to the poor and the downtrodden, when they can focus on huge gains from the top 10 percent of the population?

Moley could easily be a demonstration of a service that befits a plutonomy. At $15,000 per robot, Moley could find its place in every millionaire’s house. At the same time, it could put out of work many of the low-level, low-earning cooks in kitchens worldwide.

You might say, of course, that those low-level cooks would be able to compete in the new app market as well, and offer their own creations to the public. You would be correct, but consider that any digital market tends to become a “winner takes all” market. There is simply no place for plenty of big winners in the app – or digital recipe – market.

Moley, then, is essentially another invention driving us closer to plutonomy.

 

And yet…

New technologies have always cost some people their livelihood, while helping many others. Matt Ridley, in his masterpiece The Rational Optimist, describes how the guilds fought relentlessly against the industrial revolution in England, even though that revolution led, in a relatively short period of time, to a betterment of the human condition there. Some people lost their jobs as a result of the industrial revolution, but they found new ones. In the meantime, everybody suddenly enjoyed better and cheaper clothes, better products in the stores, and an overall improvement in the economy, since England could export its surplus products.

Moley and similar robots will almost certainly cost some people their jobs, but they also have the potential to minimize the cost of food, minimize the time spent making food in the household (I spend 45-60 minutes every day making food for my family and me), and elevate the quality of life of the general public – but only if the technology drops in price and can be deployed in many venues, including private homes.

 

Conclusion

If it’s a forecast you want, then here it is. While we can’t know for sure whether it will be Moley itself that conquers the market or some other robotics company, it seems likely that as AI continues to develop and drop in price, robots will become part of many households. I believe the drop in prices will be significant enough over a period of twenty years that almost everybody will be able to enjoy the presence of kitchen robots in their homes.

That said, pricing and services are not a matter of technological prowess alone, but also a social one: will the robotic companies focus on the wealthy, or will they find financial models with which to provide services for the poor as well?

This decision could shape our future as we know it, and determine whether we’ll continue our headlong dive towards plutonomy.