We hear all around us about the major breakthroughs that await just around the bend: miraculous cures for cancer, amazing feats of genetic engineering, robots that will soon take over the job market. And yet, underneath all the hubbub lurk the little stories – the occasional bizarre occurrences that indicate the kind of world we’re heading into. One of those recent tales happened at the beginning of this year, and it can provide a few hints about the future. I call it – The Tale of the Little Drone that Could.
Our story begins towards the end of January 2017, when said little drone was launched in southern Arizona as part of a simple exercise. The drone was part of the Shadow RQ-7Bv2 series, but we’ll just call it Shady from now on. Drones like Shady are usually used for surveillance by the US Army, and should not stray more than 77 miles (120 km) away from their ground-based control station. But Shady had other plans in the mind it didn’t have: as soon as it was launched, all communications were lost between the drone and the control station.
Shady the drone. Source: Department of Defense
Other, more primitive drones would probably have crashed at around this stage, but Shady was a special drone indeed. You see, Shadow drones enjoy a high level of autonomy. In simpler words, they can stay in the air and keep performing their mission even if they lose their connection with the operator. The only issue was that Shady didn’t know what its mission was. And as the confused operators on the ground realized at that moment – nobody really had any idea what it was about to do.
Autonomous aerial vehicles are usually programmed to perform certain tasks when they lose communication with their operators. Emergency systems are activated as soon as the drone realizes that it’s all alone, up there in the sky. Some drones circle above a certain point until the radio connection is reestablished. Others attempt to land immediately, or try to return to the point from which they were launched. This, at least, is what the emergency systems are supposed to do. Except that in Shady’s case, a malfunction happened, and they didn’t.
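To make the idea concrete, here is a minimal sketch of what such a lost-link fallback might look like in code. The behavior names, the check interval and the function signatures are my own invention for illustration; real drones like the Shadow implement this logic in certified flight software, not in a few lines of Python.

```python
# A toy lost-link fallback routine. Behavior names and timing are invented
# for illustration; this is not how real flight software is written.
import enum
import time

class LostLinkBehavior(enum.Enum):
    LOITER = "circle above a fixed point until the link returns"
    LAND = "land immediately"
    RETURN_HOME = "fly back to the launch point"

def on_link_lost(behavior, link_is_up, act):
    """Keep executing the pre-programmed emergency behavior until the link returns."""
    while not link_is_up():
        act(behavior)        # e.g. command the autopilot to loiter or turn home
        time.sleep(0.1)      # then re-check the radio link

# Example usage with stand-in functions: the link comes back on the third check.
if __name__ == "__main__":
    checks = iter([False, False, True])
    on_link_lost(
        LostLinkBehavior.RETURN_HOME,
        link_is_up=lambda: next(checks),
        act=lambda b: print(f"Executing: {b.value}"),
    )
```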
Or maybe they did.
Some believe that Shady’s memory accidentally contained the coordinates of its former home at a military base in Washington state, and that it valiantly attempted to return there. Or maybe it didn’t. These are, obviously, just speculations. It’s entirely possible that the emergency systems simply failed to kick into action, and Shady just kept sailing up in the sky, flying towards the unknown.
Be that as it may, our brave (at least in the sense that it felt no fear) little drone left its frustrated operators behind and headed north. It rode the strong winds of that day, and sailed over forests and Native American reservations. Throughout its flight, the authorities kept track of the drone by radar, but after five hours it reached the Rocky Mountains. It should not have been able to cross them, and since the military lost its radar signature at that point, everyone simply assumed that Shady had crashed.
But it didn’t.
Instead, Shady rose higher, to an altitude of 12,000 feet (4,000 meters), and glided over the Rocky Mountains, in environmental conditions it was not designed for and at distances it was never meant to operate at. Nonetheless, it kept buzzing north, undeterred, on a 632-mile journey, until it crashed near Denver. We don’t yet know the reason for the crash, but it’s likely that Shady simply ran out of fuel at about that point.
The Rocky Mountains. Shady crossed them too.
And that is the tale of Shady, the little drone that never thought it could – mainly since it doesn’t have any thinking capabilities at all – but went the distance anyway.
What Does It All Mean?
Shady is just one autonomous robot out of many. Autonomous robots, even limited ones, can perform certain tasks with minimal involvement by a human operator. Shady’s tale is simply the result of a bug in the robot’s operating system. There’s nothing strange in that by itself, since we discover bugs in practically every program we use: the Word program I’m using to write this post occasionally (though rarely, fortunately) gets stuck, or even starts deleting letters and words by itself. These bugs are annoying, but we realize they’re practically inevitable in programs as complex as the ones we use today.
Well, Shady had a bug as well. The only difference between Word and Shady is that the latter is a military drone worth $1.5 million, and the bug caused it to cross three states and the Rocky Mountains with no human supervision. It can safely be said that we’re all lucky Shady is normally used only for surveillance, and is thus unarmed. But Shady’s less innocent cousin, the Predator drone, is also used to attack military targets on the ground, and is thus equipped with two Hellfire anti-tank missiles and six Griffin air-to-surface missiles.
A Predator drone firing away.
I rather suspect that we would be less amused by this episode if one of the armed Predators had taken Shady’s place and sailed across America, with nobody knowing where it was going or what it planned to do once it got there.
Robots and Urges
I’m sure that the emotionally laden story at the beginning of this post made some of you laugh, and for a very good reason. Robots have no will of their own. They have no thoughts or self-consciousness. The sophisticated autonomous robots of the present, though, do exhibit “urges”: programmers build certain urges into the robots, and these are activated in predefined ways.
In many ways, autonomous robots resemble insects. Both are conditioned – by programming or by the structure of their simple nervous systems – to act in certain ways in certain situations. From that viewpoint, insects and autonomous robots both have urges. And while insects are quite complex organisms, they have bugs as well – which is the reason mosquitos keep flying into the light of electric traps at night. Their simple urges are incapable of dealing with the new demands of the modern environment. And if insects can experience bugs in unexpected environments, how much more so for autonomous robots?
Shady’s tale shows what happens when a robot obeys the wrong kind of urges. Such bugs are inevitable in any complex system, but their impact could be disastrous when they occur in autonomous robots – especially of the armed variety that can be found in the battlefield.
Scared? Take Action!
If this prospect scares you as well, you may want to sign the open letter that the Future of Life Institute released around a year and a half ago against the use of autonomous weapons in war. You won’t be alone out there: more than a thousand AI researchers have already signed it.
Will governments be deterred from employing autonomous robots in war? I highly doubt it. We failed to stop even the potentially world-shattering spread of nuclear weapons, so putting a halt to robotic proliferation doesn’t seem likely. But at least when the next Shady or Freddy the Predator gets lost, you’ll be able to shake your head in disappointment and mention that you knew this would happen, that you warned everyone in advance, and that nobody listened to you.
And when that happens, you’ll finally know what being a futurist feels like.
OK, so I know the headline of this post isn’t really the sort a stable and serious scientist, or even a futurist, should be asking. But I was asked this question on Quora, and thought it warranted some thought. So here’s my answer to this mystery that has hounded movie directors for the last century or so!
If Japan actually managed to create the huge robots / exoskeletons so favored in the anime genre, all the generals in all the opposing armies would stand up and clap wildly for them, because these robots are practically the worst war machines ever. And believe it or not, I know that because we conducted actual research into this area, together with Dr. Aharon Hauptman and Dr. Liran Antebi.
But before I tell you about that research, let me say a few words about the woes of huge humanoid robots.
First, there are already some highly sophisticated exoskeleton suits developed by major military contractors, such as Raytheon’s XOS2 and Lockheed Martin’s HULC. While they’re definitely the coolest thing since sliced bread and frosted donuts, they have one huge disadvantage: they need plenty of energy to work. As long as you can connect them to a power line, it shouldn’t be too much of an issue. But once you ask them to go out to the battlefield… well, after one hour at most they’ll stop working, and quite likely trap the human operating them.
Some companies, like Boston Dynamics, have tried to overcome the energy challenge by adding a diesel engine to their robots. Which is great, except for the fact that it’s still pretty cumbersome, and extremely noisy. Not much use for robots that are supposed to accompany marines on stealth missions.
Robots: Left – Raytheon’s XOS2 exoskeleton suit; Upper right – Lockheed Martin’s HULC; Bottom right – Boston Dynamics’ Alpha Dog.
But who wants stealthy robots, anyway? We’re talking about gargantuan robots, right?!
Well, here’s the thing: the larger and heavier the robot, the more energy you need to operate it. That means you can’t really add much armor to it. And the larger you make it, the more unwieldy it becomes. There’s a reason elephants are so sturdy, with thick legs – that’s the only way they can support their enormous body weight. Huge robots, which are much heavier than elephants, can’t even have legs with joints. When the MK. II Mech was unveiled at Maker Faire 2015, it reached a height of 15 feet, weighed around 6 tons… and could only move by crawling on a caterpillar track. So, in short, it was a tank.
And don’t even think about it rising to the air. Seriously. Just don’t.
Megabots’ MK. II Mech, complete with the quintessential sexy pilot.
But let’s say you manage to somehow bypass all of those pesky energy constraints. Even in that case, huge humanoid robots would not be a good idea because of two main reasons: shape, and size.
Let’s start with shape. The human body has evolved the way it is – limbs, groin, hair and all – to cope with the hardships of life on the one hand, while also being able to have sex, give birth and generally do fun stuff on the other. But robots aren’t supposed to be doing fun stuff. Unless, that is, you want to build a huge Japanese humanoid sex robot. And yes, I know that sounds perfectly logical for some horribly unfathomable reason, but that’s not what the question is about.
So – if you want a battle-robot, you just don’t need things like legs, a groin, or even a head with a vulnerable computer-brain. You don’t need a huge multifunctional battle-robot. Instead, you want small and efficient robots that are uniquely suited to the task set for them. If you want to drop bombs, use a bomber drone. If you want to kill someone, use a simple robot with a gun. Heck, it can look like a child’s toy, or like a ball, but what does it matter? It just needs to get the job done!
You don’t need a gargantuan Japanese robot for battle. You can even use robots as small as General Robotics’ Dogo: basically a small tank the size of your foot, which carries a Glock pistol and can use it efficiently.
Last but not least, large humanoid robots are not only inefficient, cumbersome and impractical, but are also extremely vulnerable to being hit. One solid hit to the head will take them out. Or to a leg. Or the torso. Or the groin of that gargantuan Japanese sex-bot that’s still wondering why it was sent to a battlefield where real tanks are doing all the work. That’s why armies around the world are trying to figure out how to use swarms of drones instead of deploying one large robot: if one drone takes the hit, the rest of the swarm still survives.
So now that I’ve thrown cold water on the idea of large Japanese humanoid robots, here’s the final rub. A few years ago I took part in a research project with Dr. Aharon Hauptman and Dr. Liran Antebi that was meant to assess the capabilities robots will possess in the next twenty years. I’ll cut straight to the chase: the experts we interviewed and surveyed believed that in twenty years or less we’ll have –
Robots with perfect camouflage capabilities in visible light (essentially invisibility);
Robots that can heal themselves, or use objects from the environment as replacement parts;
Biological robots.
One of the few categories the experts were skeptical about was that of “transforming platforms” – i.e. robots that can change shape to adapt themselves to different tasks. There is just no need for these highly versatile (and expensive, inefficient and vulnerable) robots when you can send ten highly specialized robots to perform each task in turn. Large humanoid robots are the same. There’s just no need for them in warfare.
So, to sum things up: if Japan were to construct anime-style Gundam-like robots and send them to war, I really hope they prepare them for having sex, because they would be screwed over pretty horribly.
I’ve done a lot of writing and research recently about the bright future of AI: that it’ll be able to analyze human emotions, understand social nuances, conduct medical treatments and diagnoses that overshadow those of the best human physicians, and in general make many human workers redundant.
I still stand behind all of these forecasts, but they are meant for the long term – twenty or thirty years into the future. And so, the question that many people want answered is about the situation at the present. Right here, right now. Luckily, DARPA has decided to provide an answer to that question.
DARPA is one of the most interesting US agencies. It’s dedicated to funding ‘crazy’ projects – ideas that are completely outside the accepted norms and paradigms. It should come as no surprise, then, that DARPA contributed to the establishment of the early internet and the Global Positioning System (GPS), as well as a flurry of other bizarre concepts, such as legged robots, prediction markets, and even self-assembling work tools. Ever since DARPA was founded, it has focused on moonshots and breakthrough initiatives, so it’s no surprise that it’s also focusing on AI at the moment.
Recently, DARPA’s Information Innovation Office released a new YouTube clip explaining the state of the art in AI, outlining its capabilities in the present – and considering what it could do in the future. The online magazine Motherboard described the clip as “targeting [the] AI hype”, and as “necessary viewing”. It’s 16 minutes long, but I’ve condensed its core messages – and my thoughts about them – into this post.
The Three Waves of AI
DARPA distinguishes between three different waves of AI, each with its own capabilities and limitations. Out of the three, the third one is obviously the most exciting, but to understand it properly we’ll need to go through the other two first.
First AI Wave: Handcrafted Knowledge
In the first wave of AI, experts devised algorithms and software according to the knowledge they themselves possessed, and tried to provide these programs with logical rules that had been deciphered and consolidated throughout human history. This approach led to the creation of chess-playing computers and of delivery-optimization software. Most of the software we use today is based on AI of this kind: our Windows operating system, our smartphone apps, and even the traffic lights that allow people to cross the street when they press a button.
Modria is a good example of the way this kind of AI works. Modria was hired in recent years by the Dutch government to develop an automated tool that helps couples get a divorce with minimal involvement from lawyers. Modria, which specializes in the creation of smart justice systems, took the job and devised an automated system that relies on the knowledge of lawyers and divorce experts.
On Modria’s platform, couples that want to divorce are asked a series of questions. These could include questions about each parent’s preferences regarding child custody, property distribution and other common issues. After the couple answers the questions, the system automatically identifies the topics on which they agree or disagree, and tries to direct the discussions and negotiations toward the optimal outcome for both.
First wave AI systems are usually based on clear and logical rules. The systems examine the most important parameters in every situation they need to solve, and reach a conclusion about the most appropriate action to take in each case. The parameters for each type of situation are identified in advance by human experts. As a result, first wave systems find it difficult to tackle new kinds of situations. They also have a hard time abstracting – taking knowledge and insights derived from certain situations, and applying them to new problems.
To sum it up, first wave AI systems are capable of implementing simple logical rules for well-defined problems, but are incapable of learning, and have a hard time dealing with uncertainty.
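To get a feel for what “handcrafted knowledge” looks like in practice, here is a minimal sketch of a first-wave, rule-based system in the spirit of Modria’s platform. The topics, the answers and the single hand-written rule are all invented for illustration; the real system’s logic is far more elaborate.

```python
# A toy first-wave system: a hand-written rule applied to pre-defined parameters.
# Topics and answers are invented for illustration.
def find_disagreements(spouse_a_answers, spouse_b_answers):
    """Compare the two spouses' answers and flag the topics they disagree on."""
    return [
        topic
        for topic in spouse_a_answers
        if spouse_a_answers[topic] != spouse_b_answers.get(topic)
    ]

a = {"child_custody": "shared", "house": "sell", "car": "spouse_b_keeps"}
b = {"child_custody": "shared", "house": "spouse_b_keeps", "car": "spouse_b_keeps"}

# The rule itself: agreed topics are considered settled, disagreements go to negotiation.
for topic in find_disagreements(a, b):
    print(f"Disagreement on '{topic}' - direct the couple to negotiation")
```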
Now, some of you readers may shrug at this point and say that this is not artificial intelligence as most people think of it. The thing is, our definitions of AI have evolved over the years. If I could have shown Google Maps to a person on the street thirty years ago and asked whether it was AI software, they wouldn’t have hesitated in their reply: of course it is AI! Google Maps can plan an optimal course to get you to your destination, and even explain in clear speech where you should turn at each and every junction. And yet, many today see Google Maps’ capabilities as elementary, and require AI to perform much more than that: AI should also take control of the car on the road, develop a conscious philosophy that takes the passenger’s desires into consideration, and make coffee at the same time.
Well, it turns out that even ‘primitive’ software like Modria’s justice system and Google Maps are fine examples of AI. And indeed, first wave AI systems are in use everywhere today.
Second AI Wave: Statistical Learning
In 2004, DARPA opened its first Grand Challenge: fifteen autonomous vehicles competed to complete a 150-mile course in the Mojave Desert. The vehicles relied on first wave AI – i.e. rule-based AI – and immediately proved just how limited this kind of AI actually is. Every picture taken by a vehicle’s camera, after all, is a new sort of situation the AI has to deal with!
To say that the vehicles had a hard time handling the course would be an understatement. They could not distinguish between different dark shapes in images, and couldn’t figure out whether a shape was a rock, a far-away object, or just a cloud obscuring the sun. As the Grand Challenge’s deputy program manager put it, some vehicles “were scared of their own shadow, hallucinating obstacles when they weren’t there.”
The sad result of the first DARPA Grand Challenge
None of the groups managed to complete the entire course, and even the most successful vehicle only got as far as 7.4 miles into the race. It was a complete and utter failure – exactly the kind of research that DARPA loves funding, in the hope that the insights and lessons derived from these early experiments would lead to the creation of more sophisticated systems in the future.
And that is exactly how things went.
One year later, when DARPA held the 2005 Grand Challenge, five groups successfully made it to the end of the track. Those groups relied on the second wave of AI: statistical learning. The head of one of the winning groups was immediately snatched up by Google, by the way, and put in charge of developing Google’s autonomous car.
In second wave AI systems, the engineers and programmers don’t bother teaching the system precise and exact rules to follow. Instead, they develop statistical models for certain types of problems, and then ‘train’ these models on many varied samples to make them more precise and efficient.
Statistical learning systems are highly successful at understanding the world around them: they can distinguish between two different people or between different vowels. They can learn and adapt themselves to different situations if they’re properly trained. However, unlike first wave systems, they’re limited in their logical capacity: they don’t rely on precise rules, but instead they go for the solutions that “work well enough, usually”.
The poster child of second wave systems is the artificial neural network. In artificial neural networks, the data passes through computational layers, each of which processes the data in a different way and transmits it to the next layer. By training each of these layers, as well as the complete network, the network can be shaped into producing more accurate results. Oftentimes, the training requires the network to analyze tens of thousands of data samples to reach even a tiny improvement. But generally speaking, this method provides better results than those achieved by first wave systems in certain fields.
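Here is a toy sketch of such a network: a tiny two-layer neural network trained by gradient descent to learn the XOR function. The layer sizes, learning rate and training data are chosen purely for illustration – real second-wave systems are vastly larger – but the principle is the same: data flows forward through the layers, the error flows backward, and the weights are nudged until the answers improve.

```python
# A toy second-wave system: a tiny neural network that learns XOR.
# Layer sizes, learning rate and data are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR of each input pair.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of weights and biases, initialized at random.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(20000):
    # Forward pass: data flows through the layers.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: push the error back and nudge every weight a little.
    grad_out = (output - y) * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ grad_hid
    b1 -= lr * grad_hid.sum(axis=0, keepdims=True)

print(output.round(2))  # after training: close to [[0], [1], [1], [0]]
```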
So far, second wave systems have managed to outdo humans at face recognition, at speech transcription, and at identifying animals and objects in pictures. They’re making great leaps forward in translation, and if that’s not enough – they’re starting to control autonomous cars and aerial drones. The success of these systems at such complex tasks leaves AI experts astonished, and for a very good reason: we’re not yet quite sure why they actually work.
The Achilles’ heel of second wave systems is that nobody is certain why they work so well. We see artificial neural networks succeed at the tasks they’re given, but we don’t understand how they do so. Furthermore, it’s not clear that there actually is a methodology – some kind of reliance on ground rules – behind artificial neural networks. In some respects they are indeed much like our brains: we can throw a ball into the air and predict where it’s going to fall, without calculating Newton’s equations of motion, or even being aware of their existence.
This may not sound like much of a problem at first glance. After all, artificial neural networks seem to be working “well enough”. But Microsoft may not agree with that assessment. The firm released a bot on social media last year, in an attempt to emulate human writing and make light conversation with youths. The bot, christened “Tay”, was supposed to replicate the speech patterns of a 19-year-old American girl, and talk with teenagers in their unique slang. Microsoft figured the youths would love that – and indeed they did. Many of them began pranking Tay: they told her of Hitler and his great success, revealed to her that the 9/11 terror attack was an inside job, and explained in no uncertain terms that immigrants are the bane of the great American nation. And so, a few hours later, Tay began applying her newfound knowledge, claiming live on Twitter that Hitler was a fine guy altogether, and really did nothing wrong.
That was the point when Microsoft’s engineers took Tay down. Her last tweet was that she was taking a time-out to mull things over. As far as we know, she’s still mulling.
This episode exposed the causality challenge which AI engineers are currently facing. We could predict fairly well how first wave systems would function under certain conditions. But with second wave systems we can no longer easily identify the causality of the system – the exact way in which input is translated into output, and data is used to reach a decision.
None of this means that artificial neural networks and other second wave AI systems are useless. Far from it. But it’s clear that if we don’t want our AI systems to get all excited about the Nazi dictator, some improvements are in order. We must move on to the third wave of AI systems.
Third AI Wave: Contextual Adaptation
In the third wave, the AI systems themselves will construct models that will explain how the world works. In other words, they’ll discover by themselves the logical rules which shape their decision-making process.
Here’s an example. Let’s say that a second wave AI system analyzes the picture below, and decides that it is a cow. How does it explain its conclusion? Quite simply – it doesn’t.
There’s an 87% chance that this is a picture of a cow. Source: Wikipedia
Second wave AI systems can’t really explain their decisions – just as a kid could not write down Newton’s equations of motion simply by looking at the movement of a ball through the air. At most, second wave systems can tell us that there is an “87% chance of this being the picture of a cow”.
Third wave AI systems should be able to add some substance to the final conclusion. When a third wave system examines the same picture, it will probably say that since there is a four-legged object in it, there’s a higher chance of this being an animal. And since its surface is white splotched with black, it’s even more likely that this is a cow (or a Dalmatian dog). Since the animal also has udders and hooves, it’s almost certainly a cow. That, presumably, is what a third wave AI system would say.
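As a purely illustrative sketch, the reasoning layer of such a system might look something like the toy code below, which combines pieces of visual evidence into both a confidence score and a human-readable explanation. The features, weights and phrasing are all invented for the sake of the example – no existing system works this simply.

```python
# A toy sketch of the kind of explanation a third-wave system might produce.
# The features, weights and wording are invented for illustration.
EVIDENCE = {
    "four_legs": ("likely an animal", 0.25),
    "black_and_white_patches": ("consistent with a cow or a Dalmatian", 0.20),
    "udders": ("strongly suggests a cow", 0.25),
    "hooves": ("strongly suggests a cow", 0.17),
}

def explain_cow(detected_features):
    """Combine detected features into a confidence score and readable reasons."""
    score, reasons = 0.0, []
    for feature in detected_features:
        if feature in EVIDENCE:
            reason, weight = EVIDENCE[feature]
            score += weight
            reasons.append(f"{feature.replace('_', ' ')}: {reason}")
    return score, reasons

confidence, explanation = explain_cow(
    ["four_legs", "black_and_white_patches", "udders", "hooves"]
)
print(f"Confidence this is a cow: {confidence:.0%}")  # 87%
for line in explanation:
    print(" -", line)
```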
Third wave systems will be able to rely on several different statistical models to reach a more complete understanding of the world. They’ll be able to train themselves – just as AlphaGo did when it played a million Go games against itself – to identify the common-sense rules they should use. Third wave systems will also be able to take information from several different sources to reach a nuanced and well-explained conclusion. These systems could, for example, pull data from several of our wearable devices, from our smart home, from our car and from the city in which we live, and determine our state of health. They’ll even be able to program themselves, and potentially develop abstract thinking.
The only problem is that, as the director of DARPA’s Information Innovation Office says himself, “there’s a whole lot of work to be done to be able to build these systems.”
And this, as far as the DARPA clip is concerned, is the state of the art of AI systems in the past, present and future.
What It All Means
DARPA’s clip does indeed explain the differences between the various AI systems, but it does little to assuage the fears of those who urge us to exercise caution in developing AI engines. DARPA makes clear that we’re not even close to developing a ‘Terminator’ AI, but that was never the issue in the first place. Nobody is trying to claim that AI today is sophisticated enough to do all the things it’s expected to do in a few decades: have a motivation of its own, make moral decisions, and even develop the next generation of AI.
But the fulfillment of the third wave is certainly a major step in that direction.
When third wave AI systems are able to decipher new models that improve their own function, all on their own, they’ll essentially be able to program new generations of software. When they understand context and the consequences of their actions, they’ll be able to replace most human workers, and possibly all of them. And when they’re allowed to reshape the models through which they appraise the world, they’ll effectively be able to reprogram their own motivation.
None of the above will happen in the next few years, and it certainly won’t be achieved in full within the next twenty. As I explained, no serious AI researcher claims otherwise. The core message of the researchers and visionaries who are concerned about the future of AI – people like Stephen Hawking, Nick Bostrom, Elon Musk and others – is that we need to start asking right now how to control these third wave AI systems, of the kind that will become ubiquitous twenty years from now. When we consider the capabilities these AI systems will have, this message does not seem far-fetched.
The Last Wave
The most interesting question for me, which DARPA does not seem to delve into, is what the fourth wave of AI systems will look like. Will it rely on an accurate emulation of the human brain? Or maybe fourth wave systems will exhibit decision-making mechanisms that we are as yet incapable of understanding – and which will be developed by the third wave systems themselves?
These questions are left open for us to ponder, to examine and to research.
That’s our task as human beings, at least until third wave systems go on to do that too.
“Hey, wake up! You’ve got to see something amazing!” I gently wake up my four-year-old son.
He opens his eyes and mouth in a yawn. “Is it Transformers?” He asks hopefully.
“Even better!” I promise him. “Come outside to the porch with me and you’ll see for yourself!”
He dashes outside with me. Out in the street, Providence’s garbage truck is taking care of the trash bins in a completely robotic fashion. Here’s the video evidence I shot, so you can see for yourself –
The kid glares at me. “That’s not a Transformer.” He says.
“It’s a vehicle with a robotic arm that grabs the trash bins, lifts them up in the air and empties them into the truck.” I argue. “And then it even returns the bins to their proper place. And you really should take note of this, kiddo, because every detail in this scene provides hints about the way you’ll work in the future, and what the job market will look like.”
“What’s a job?” He asks.
I choose to ignore that. “Here are the most important points. First, routine tasks become automated. Routine tasks are those that need to be repeated without too much variation in between, and can therefore be easily handled by machines. In fact, that’s what the industrial revolution was all about – machines doing menial human labor more efficiently than human workers, on a massive scale. But in the last few decades, machines have shown themselves capable of taking on more and more routine tasks. And very soon we’ll see tasks that were considered non-routine in the past, like driving a car, being relegated to robots. So if you want to have a job in the future, try to find something that isn’t routine – a job that requires mental agility and finding solutions to new challenges every day.”
He’s rubbing his eyes in earnest, but I’m on a roll now.
“Second, we’ll still need workers, but not as many. Science fiction authors love writing about a future in which nobody will ever need to work, and robots will serve us all. Maybe this future will come to pass, but on the way there we’ll still need human workers to bridge the gap between ancient and novel systems. In the garbage truck, for example, the robotic arm replaces two or three workers, but we still need the driver to pilot the vehicle – which is ancient technology – and to deal with unexpected scenarios. Even when the vehicle becomes completely autonomous and no longer needs a driver, a few workers will still need to be on alert: they’ll be called to places where the truck has malfunctioned, or where the AI has identified a situation it is incapable of handling, or unauthorized to handle. So there will still be human workers, just not as many as we have today.”
He opens his mouth for another yawn, but I cut him short. “Never show them you’re tired! Which brings me to the third point: in the future, we’ll need fewer workers – but of higher caliber. Each worker will carry a larger burden on his or her shoulders. Take this driver, for example: he needs to stop at the exact spot in front of every bin, operate the robotic arm and make sure nothing gets messy. In the past, drivers didn’t need all that responsibility, because the garbage workers who rode in the back of the truck did most of the work. The modern driver also had to learn to operate the new vehicle with the robotic arm, so it’s clear that he is learning and adapting to new technologies. These are skills that you’ll need to learn and acquire for yourself. And when will you learn them?!”
“In the future.” He recites by rote in a toneless voice. “Can I go back to sleep now?”
“Never.” I promise him. “You have to get upgraded – or be left behind. Take a look at those two bins on the pavement. The robotic arm can only pick up one of them – the one that comes in the right size. The other bin is left unattended, and has to wait until a primitive human comes to take care of it. In other words, only the upgraded bin receives efficient and rapid treatment from the garbage truck. So unless you want to be left far behind like that other trash bin, you have to prepare for the future and move along with it – or everyone else will leap ahead of you.”
He nods with drooping lids, and yawns again. I allow him to complete this yawn, at least.
“OK daddy.” He says. “Now can I go back to bed?”
I stare at him for a few more moments, while my mind returns from the future to the present.
“Yes,” I smile sadly at him. “Go back to bed. The future will wait patiently for you to grow up.”
My gaze follows him as he goes back to his room, and the smile melts from my lips. He’s still just four years old, and will learn all the skills he needs to handle the future world as he grows up.
For him, the future will wait patiently.
For others – like those unneeded garbage workers – it’s already here.
Pepper is one of the most sophisticated household robots in existence today. It has a body shape reminiscent of a prepubescent child, reaching a height of only 120 centimeters, with a tablet on its chest. It constantly analyzes its owner’s emotions according to their speech, facial expressions and gestures, and responds accordingly. It also learns – for example, by analyzing which modes of behavior it can enact to make its owner feel better. It can even use its hands to hug people.
No wonder, then, that when the first 1,000 Pepper units were offered for sale in Japan for $1,600 apiece, they all sold out within one minute. Pepper is now the most famous household robot in the world.
Pepper is probably also the only robot you’re not allowed to have sex with.
According to the contract, written in Japanese legalese and translated into English, users are not allowed to perform –
“(4) Acts for the purpose of sexual or indecent behavior, or for the purpose of associating with unacquainted persons of the opposite sex.”
What does this development mean? Here is the summary, in just three short points.
First Point: Is Pepper Being Used for Surveillance?
First, one has to wonder just how SoftBank, the robot’s distributor in Japan, is going to keep tabs on whether the robot has been used sexually or not. Since Pepper’s price includes a $200 monthly “data and insurance fee”, it’s a safe bet that every Pepper unit is transmitting some of its data back to SoftBank’s servers. That’s not necessarily a bad thing: as I wrote in Four Robot Myths It’s Time We Let Go Of, robots can no longer be seen as individual units. Instead, they are a form of hive brain, relying on each other’s experience and insights to guide their behavior. To do that, they must be connected to the cloud.
This is obviously a form of surveillance. Pepper is sophisticated enough to analyze its owner’s emotions and responses, and can thus deliver a plethora of information to SoftBank, to advertisers and even to government authorities. The owners can probably activate a privacy mode (and if there isn’t one now, it will almost certainly be added in the near future by popular demand), but the rest of the time their behavior will be under close scrutiny – not necessarily because SoftBank is actually interested in what you’re doing in your house, but simply because it wants to improve the robots.
And, well, also because it may not want you to have sex with them.
This is where things get bizarre. It is almost certainly the case that if SoftBank wished to, it could set up a ‘sex alarm’ to go off autonomously whenever Pepper is repeatedly exposed to sexual acts. There doesn’t even have to be a human in the loop – just train the AI engine behind Pepper on a large enough number of porn and erotic movies, and pretty soon the robot will be able to tell by itself just what the owner is dangling in front of its cameras.
The rest of the tale is obvious: the robot will complain to SoftBank via the cloud, but will do so without sharing any pictures or videos it has taken. In other words, it won’t share raw information, only its insights and understanding of what’s been going on in that house. SoftBank might issue a soft warning to the owner, asking them to act more modestly around Pepper. If such chastity alerts keep coming up, though, SoftBank might have to retrieve Pepper from that house. And it almost certainly will not allow other Pepper units to learn from the one that has been exposed to sexual acts.
And here’s the rub: if SoftBank wants to keep developing its robots, they must learn from each other, and thus they must be connected to the cloud. But as long as SoftBank doesn’t want them to learn how to engage in sexual acts, it will have to set up some kind of filter – meaning that the robots will have to learn to recognize sexual acts, and refuse to talk about them with other robots. And silence, in the case of an always-operational robot, is as good as any testimony.
So yes, SoftBank will know when you’re having sex with Pepper.
I’ve written extensively in the past about how the meaning of private property is changing as everything becomes connected to the cloud. Tesla sells you a car, but still controls some parts of it. Google sells you devices for controlling your smart house – which it can (and does) shut down from a distance. And yes, SoftBank sells you a robot that becomes your private property – as long as you don’t do anything with it that SoftBank doesn’t want you to.
And that was only the first point.
Second Point: Is Sex the Answer, or the Question?
There’s been some public outrage recently about sex with robots, with an actual campaign against using robots as sex objects. I sent the leaders of the campaign, Kathleen Richardson and Erik Brilling, several questions to understand the nature of their issues with the robots. They have not answered my questions, but according to their campaign website it seems that they equate ‘robot prostitution’ with human prostitution.
“But robots don’t feel anything.” You might say now. “They don’t have feelings, or dignity of their own. Do they?”
Let’s set things straight: sexual abuse is among the most horrible things one human can do to another. The abuser causes both temporary and permanent injury to the victim’s body and mind. That’s why we call it abuse. But if there are no laws to protect a robot’s body, and no mind to speak of, why should we care whether someone uses a robot in a sexual way?
Richardson and Brilling basically claim that it doesn’t matter whether the robots actually experience the joys of coitus or suffer the ignominy of prostitution. The mere fact that people will use robots in the shape of children or women for sexual release will serve to perpetuate our current societal model, in which women and children are sexually abused.
Let’s approach the issue from another point of view, though. Could sex with robots actually prevent some cases of sexual abuse?
Assuming that robots can provide a high-quality sexual experience to human beings, it seems reasonable that some pent-up sexual tensions could be relieved using sex robots. There are arguments that porn might actually deter sexual violence, and while the debate on that point is nowhere near conclusion, it’s interesting to ask: if robots can actually relieve human sexual tensions, and thus deter sexual violence against other human beings – should we allow that to happen, even though it objectifies robots, and by association, women and children as well?
I would wait for more data to come in on this subject before actually advocating sex with robots, but in the meantime we should probably refrain from passing judgment on people who have sex with robots. Who knows? It might actually serve a useful purpose even in the near future. Which brings me to the third point –
Third Point: Don’t You Tell Me Not to Have Sex with MY Robot
Brandon Sanderson is one of my favorite fantasy and science fiction authors. He produces new books at an incredible pace, and his writing quality does not seem to suffer for it. The first book in his recent sci-fi trilogy, The Reckoners, was Steelheart, published in September 2013. Calamity, the third and last book in the series, was published in February 2016. So just three years passed between the first and the last book in the series.
The books themselves describe a post-apocalyptic future, around ten years away from us. In the first book, the hero lives in one of the most technologically advanced cities in the world, with electricity, smartphones, and sophisticated technology at his disposal. Sanderson describes sophisticated weapons used by the police forces in the city, including laser weapons and even mechanized war suits. By the third book, our hero reaches another technologically advanced outpost of humanity, and is suddenly surrounded by weaponized aerial drones.
You might say that the first city simply chose not to use aerial drones, but that explanation is a bit thin, as anyone who has read the books can testify. Instead, it seems to me that in the three years that passed after the original book was published, aerial drones made a large enough impact on the general mindset that Sanderson could no longer ignore them in his vision of the future. He realized that his readers would look askance at any vision of the future that does not include aerial drones of some kind. In effect, the drones have become part of the way we think about the future. We find it difficult to imagine a future without them.
Usually, our visions of the future change relatively slowly and gradually. In the case of the drones, it seems that within three years they’ve moved from an obscure technological item to a common myth the public shares about the future.
Science fiction, then, can show us what people in the present expect the future to look like. And therein lies its downfall.
Where Science Fiction Fails
Science fiction can be used to help us explore alternative futures, and it does so admirably well. However, best-selling books must reach a wide audience and resonate with many readers on several different levels. To do that, the most popular science fiction authors cannot stray too far from our current notions. They cannot let go of our natural intuitions and core feelings: love, hate, the appreciation we have for individuality, and many others. They can explore themes in which the anti-hero, or The Enemy, defies these commonalities that we share in the present. However, if the author wants to write a really popular book, he or she will take care not to completely forgo the reality we know.
Of course, many science fiction books are meant for an ‘in-house’ audience: the hard-core sci-fi readers who are eager to think beyond the box of the present. Alastair Reynolds, in his Revelation Space series, for example, succeeds in writing sci-fi literature for exactly this audience. He writes stories that in many aspects transcend notions of individuality, love and humanity. And he pays the price for this transgression, as his books (to the best of my knowledge) have yet to appear on the New York Times Best Seller list. Why? As one disgruntled reviewer writes about Reynolds’ book Chasm City –
“I prefer reading a story where I root for the protagonist. After about a third of the way in, I was pretty disturbed by the behavior of pretty much everyone.”
Highly popular sci-fi literature is thus forced never to let go completely of present paradigms, which sadly limits its use as a tool for developing and analyzing far-away futures. On the other hand, it’s conceivable that an annual analysis of the most popular sci-fi books could give us an understanding of the public state of mind regarding the future.
Of course, there are much easier ways to determine how much hype certain technologies receive in the public sphere. It’s likely that by running data-mining algorithms on the content of technology blogs and websites, we would reach better conclusions. Such algorithms can also be run practically every hour of every day. So yeah, that’s probably a more efficient route to figuring out how the public views the future of technology.
But if you’re looking for an excuse to read science fiction novels for a purely academic reason, just remember you found it in this blog post.
It all began in a horribly innocent fashion, as such things often do. The Center for Middle East Studies at Brown University, near my home, held a “public discussion” about the futures of Palestinians in Israel. Naturally, as an Israeli living in the States, I’m still very much interested in this area, so I took a look at the panelist list and discovered immediately that they all came from the same background and shared the same point of view: Israel was the colonialist oppressor, and that was pretty much all there was to it in their view.
Quite frankly, this seemed bizarre to me: how can you have a discussion about the future of a people in a region without understanding the complexities of their geopolitical situation? How can you talk about the future in a war-torn region like the Middle East when nobody speaks about security issues, or presents the state of mind of Israeli citizens or the Israeli government? In short, how can you have a discussion when all the panelists say exactly the same thing?
So I decided to do something about it, and therein lies my downfall.
I am the proud co-founder of TeleBuddy – a robotics services start-up that operates telepresence robots worldwide. If you want to reach somewhere far away – Israel, California, or even China – we can place a robot there so that instead of wasting time and health on flying, you can simply log into the robot and be there immediately. We mainly use Double Robotics‘ robots, and since I had one free for use, I immediately thought we could use it to bring a representative of the Israeli point of view to the panel – in a robotic body.
Things began moving in a blur from that point. I obtained permission from Prof. Beshara Doumani, who organized the panel, to bring a robot to the venue. StandWithUs – an organization that disseminates information about Israel in the United States – graciously agreed to send a representative by the name of Shahar Azani to log into the robot, and so it happened that I came to the event with possibly the first-ever robotic diplomat.
Things went very well at the event itself. While my robotic friend was not allowed to speak from the stage, he talked with people in the venue before the event began, and had plenty of fun. Some of the people at the event seemed excited about the robot. Others were reluctant to approach him, so he talked with other people instead. The entire thing was very civil, as other participants in the panel later remarked. I really thought we had found a good use for the robot, and even suggested to the organizers that next time they could use TeleBuddy’s robots to ‘teleport’ a different representative – maybe a Palestinian – to their event. I went home happily, feeling I had made just a little bit of a difference in the world and contributed to an actual discussion between the two sides of a conflict.
A few days later, Open Hillel published a statement about the event, as follows –
“In a dystopian twist, the latest development in the attack on open discourse by right-wing pro-Israel groups appears to be the use of robots to police academic discourse. At a March 3, 2016 event about Palestinian citizens of Israel sponsored by Middle East Studies at Brown University, a robot attended and accosted students. The robot used an iPad to display a man from StandWithUs, which receives funding from Israel’s government.
…
Before the event began, students say, the robot approached students and harassed them about why they were attending the event. Students declined to engage with this bizarre form of intimidation and ignored the robot. At the event itself, the robot and the StandWithUs affiliate remained in the back. During the question and answer session, the man briefly left the robot’s side to ask a question.
…
It is not yet known whether this was the first use of a robot to monitor Israel-Palestine discourse on campus. … Open Hillel opposes the attempts of groups like StandWithUs to monitor students and faculty. As a student-led grassroots campaign supported by young alumni, professors, and rabbis, Open Hillel rejects any attempt to stifle or target student or faculty activists. The use of robots for purposes of surveillance endangers the ability of students and faculty to learn and discuss this issue. We call upon outside groups such as StandWithUs to conduct themselves in accordance with the academic principles of open discourse and debate.”
I later happened to meet some of the students who had been at the event, and asked them why they believed the robot was used for surveillance, or to harass students. In return, they accused me of being a spy for the Israeli government. Why? Obviously, because I operated a “surveillance drone” on American soil. That’s perfect circular logic.
Lessons
There are lessons aplenty to be drawn from this bizarre incident, but the one that strikes me in particular is that you can’t easily ignore existing cultural sentiments and paradigms without taking a hit in the process. The robot was obviously not a surveillance drone, nor meant for surveillance of any kind, but Open Hillel managed to rebrand it by relying on fears that have deep roots in the American public. They did it to promote their own goal of getting some PR, and they did it so skillfully that I can’t help but applaud them for it. Quite frankly, I wish their PR people were working for me.
That said, there are issues here that need to be dealt with if telepresence robots are ever to become part of critical discussions. The fear that the robot may be recording or taking pictures at an event is justified – a tech-savvy person controlling the robot could certainly find a way to do that. However, I can’t help but feel that there are simpler ways to accomplish that, such as using one’s smartphone, or a covert lifelogging camera like the Memoto. If you fear being recorded in public, telepresence robots are probably the least of your concerns.
Conclusions
The honest truth is that this is a brand new field for everyone involved. How should robots behave at conferences? Nobody knows. How should they talk with human beings at panels or public events? Nobody can tell yet. How can we make human beings feel more comfortable when they are in the same perimeter with a suit-wearing robot that can potentially record everything it sees? Nobody has any clue whatsoever.
These issues should be taken into consideration in any venture to involve robots in the public sphere.
It seems to me that we need some kind of standard, developed in collaboration between ethicists, social scientists and roboticists, which will ensure a high level of data encryption for telepresence robots and an assurance that any data collected by the robot is deleted on the spot.
We need, in short, to develop proper robotic etiquette.
And if we fail to do that, then it shouldn’t really surprise anyone when telepresence robots are branded as “surveillance drones” used by Zionist spies.
The field of house robots has been abuzz for the last two years. It began with Jibo – the first cheap house robot, originally advertised on Indiegogo, where it gathered nearly $4 million. Jibo doesn’t look at all like Asimov’s vision of humanoid robots. Instead, it resembles a small cartoon-like version of Eve from the Wall-E movie. Jibo can understand voice commands, recognize and track faces, take pictures of family members and even speak and interact with them. It can do all that for just $750 – which seems like a reasonable deal for a house robot. Romo is another house robot, costing just $150 or so, with a cute face and a quirky attitude, which sadly went out of production last year.
Pictures of house robots: Pepper (~$1,600), Jibo (~$750), Romo (~$130). Image on the right originally from That’s Really Possible.
Now comes a new contender in the field of house robots: Robit, “The Robot That Gets Things Done”. It moves around the house on its three wheels, wakes you up in the morning, looks for lost items like your shoes or keys on the floor, detects smoke and room temperature, and even delivers beer to you on a tray. And it does all that for just $349 on Indiegogo.
I interviewed Shlomo Schwarcz, co-founder and CEO of Robit Robot, about Robit and the present and future of house robots. Schwarcz emphasized that unlike Jibo, Robit is not supposed to be a ‘social robot’. You’re not supposed to talk with it or have a meaningful relationship with it. Instead, it is your personal servant around the house.
“You choose the app (guard the house, watch your pet, play a game, dance, track objects, find your list keys, etc.) and Robit does it. We believe people want a Robit that can perform useful things around the house rather than just chat.”
It’s an interesting choice, and it seems that other aspects of Robit conform to it. While Jibo and Romo are pleasant to look at, Robit’s appearance can be somewhat frightening, with a head resembling that of a human baby. The question is, can Robit actually do everything promised in the campaign? Schwarcz mentions that Robit is essentially a mobile platform that runs apps, and the developers have created apps that cover the common, basic usages: remote control from a smartphone, movement and face detection, dancing, and a “find my things” app.
Other, more sophisticated apps will probably be left to third parties. These could include Robit analyzing foodstuffs and determining their nutritional value, launching toy missiles at items around the house using a tiny missile launcher, or keeping watch over your cat so that it doesn’t climb on that precious sofa that used to belong to your mother-in-law. These are all great ideas, but they still need to be developed by third parties.
This is where Robit both wins and fails at the same time. The developers realized that no robotic device in the near future is going to be a standalone achievement. They are all going to be connected, to learn from each other and to share insights by means of a virtual app market that can be updated every second. When used that way, robots everywhere can evolve much more rapidly. And as Schwarcz says –
“…Our vision [is] that people will help train robots and robots will teach each other! Assuming all Robits are connected to the cloud, one person can teach a Robit to identify, say a can and this information can be shared in the cloud and other Robits can download it and become smarter. We call these bits of data “insights”. An insight can be identifying something, understanding a situation, a proper response to an event or even just an eye and face expression. Robots can teach each other, people will vote for insights and in short time they will simply turn themselves to become more and more intelligent.”
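To make that vision a bit more tangible, here is a minimal, purely hypothetical sketch of what such a shared “insight” might look like as a data record that one robot uploads and another downloads. The field names and the serialization format are my own invention for illustration; Robit’s actual cloud API has not been published.

```python
# A hypothetical "insight" record shared between robots via the cloud.
# Field names and format are invented for illustration, not Robit's real API.
import json
from dataclasses import dataclass, asdict

@dataclass
class Insight:
    kind: str          # e.g. "object_recognition"
    label: str         # e.g. "soda can"
    model_blob: str    # serialized recognizer trained by one robot
    votes: int = 0     # other users vote insights up or down

def share(insight: Insight) -> str:
    """Serialize an insight so other robots could download and reuse it."""
    return json.dumps(asdict(insight))

print(share(Insight("object_recognition", "soda can", "<model bytes>")))
```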
That’s an important vision for the future, and one that I fully agree with. The only problem is that it requires the creation of an app market for a device that is not yet on the market and in people’s houses. The iPhone’s App Store was an overnight success because the device reached the hands of millions in the first year of its existence, and probably also because it was an organic continuation of the iTunes brand. At the moment, though, there is no similar app management system for robots, and certainly not enough robots out there to justify the creation of such a system.
At the moment, the Robit crowdfunding campaign is progressing slowly. I hope Robit makes it through, since it’s an innovative idea for a house robot, and it definitely has potential. Whether it succeeds or fails, the campaign mainly shows that the house robot is a concept that innovators worldwide are rapidly becoming attached to, and are trying to find the best ways to implement. Twenty years from now, we’ll laugh about all the wacky ideas these innovators had, but the best of those ideas – those that survived the test of time and the market – will serve us in our houses. Seen from that angle, Schwarcz is one of those countless unsung heroes: the ones who try to make a change in a market that nobody understands, and dare greatly.
Do you want to know what war might look like in 2048? The Israeli artist Pavel Postovit has drawn a series of remarkable images depicting soldiers, robots and mechs – all in the service of the Israeli army in 2048. He even drew aerial ships resembling the infamous Helicarrier from The Avengers (which had an unfortunate tendency to crash every second week or so).
Pavel is not the first artist to attempt to envision the future of war. Jakub Rozalski before him reimagined World War II with robots, and Simon Stalenhag has many drawings that demonstrate what warfare could look like in the future. Their drawings, obviously, are a way to forecast possible futures and bring them to our attention.
Pavel’s drawings may not be based on rigorous foresight research, but they don’t have to be. They are mainly focused on showing us one way the future may unfold. Pavel himself does not pretend to be a futures researcher, and told me that –
“I was influenced by all kind of different things – Elysium, District 9 [both are sci-fi movies from the last few years], and from my military service. I was in field intelligence, on the border with Syria, and was constantly exposed to all kinds of weapons, both ours and the Syrians.”
Here are a couple of drawings to help you understand Pavel’s vision of the future, divided according to categories I added. Be aware that the last picture is the most haunting of all.
Mechs in the Battlefield
Mechs are ground vehicles with legs – much like Boston Dynamics’ AlphaDog, on which they are presumably based. The most innovative of these mechs is the DreamCatcher – a unit with arms and hands that is used to collect “biological intelligence in hostile territory”. In one particularly disturbing image we can see why it’s called “DreamCatcher”, as the mech beheads a deceased human fighter and takes the head for inspection.
Apparently, the mechs in Pavel’s future operate almost autonomously – they can reach hostile areas of the battlefield and carry out complicated tasks on their own.
Soldiers and Aerial Drones
Soldiers in the field will be accompanied by aerial drones. Some of the drones will be larger than others – the Tinkerbell, for example, can serve both for recon and as personal CAS (Close Air Support) for the individual soldier.
Other aerial drones will be much smaller, and will be deployed as a swarm. The Blackmoth, for example, is a swarm of stealthy micro-UAVs used to gather tactical intelligence on the battlefield.
Technology vs. Simplicity
Throughout Pavel’s visions of the future we can see a repeated pattern: the technological prowess of the West colliding with the simple lifestyle of the natives. Since the images depict the Israeli army, it’s obvious why the machines are essentially fighting or constraining the Palestinians. You can see in the images below what life might look like in 2048 for Arab civilians and combatants.
Another interesting picture shows Arab combatants dealing with a heavily armed combat mech by trying to make it lose its balance. At the same time, one of the combatants is sitting to the side with a laptop – presumably trying to hack into the robot.
The Last Image
If the images above have made you feel somewhat shaken, don’t worry – it’s perfectly normal. You’re seeing here a new kind of warfare, in which robots take an extremely active part in fighting human beings. That’s war for you: brutal and horrible, and there’s not much we can do about that. If robots can actually minimize the amount of suffering on the battlefield by replacing soldiers, and by carrying out tasks with minimal casualties on both sides – that might actually be better than the human-based model of war.
Perhaps that is why I find the last picture the most horrendous one. It shows a combatant, presumably an Arab, with a bloody machete next to him and two prisoners that he’s holding in a cage. The combatant is reading a James Bond book. The symbolism is clear: this is the new kind of terrorist / combatant. He is vicious, ruthless, and well educated in Western culture – at least well enough to develop his own ideas for using technology to carry out his ideology. In other words, this is an ISIS combatant, of the kind that has begun to employ some of the West’s technologies, like aerial drones, without adhering to the moral frameworks that restrict their use by nations.
Conclusion
The future of warfare in Pavel’s vision is beginning to leave the paradigm of human-on-human action, and is rapidly moving into robotic warfare. It is very difficult to think of a military future that does not include robots in it, and obviously we should start thinking right now about the consequences, and how (and whether) we can imbue robots with sufficient autonomous capabilities to carry out missions on their own, while still minimizing casualties on the enemy side.
You can check out the rest of Pavel’s (highly recommended) drawings in THIS LINK.
A week ago I lectured in front of an exceedingly intelligent group of young people in Israel – “The President’s Scientists and Inventors of the Future”, as they’re called. I decided to talk about the future of robotics and their uses in society, and as an introduction to the lecture I tried to dispel a few myths about robots that I’ve heard repeatedly from older audiences. Perhaps not so surprisingly, the kids were just as disenchanted with these myths as I was. All the same, I’m writing the five robot myths here, for all the ‘old’ people (20+ years old) who are not as well acquainted with technology as our kids.
As a side note: I lectured in front of the Israeli teenagers about the future of robotics, even though I’m currently residing in the United States. That’s another thing robots are good for!
I’m lecturing as a tele-presence robot to a group of bright youths in Israel, at the Technion.
First Myth: Robots must be shaped as Humanoids
Ever since Karel Capek’s first play about robots, the general notion among the public has been that robots must resemble humans in their appearance: two legs, two hands and a head with a brain. Fortunately, most sci-fi authors stop at that point and do not add genitalia as well. The idea that robots have to look just like us is, quite frankly, ridiculous, and stems from an over-appreciation of our own form.
Today, this myth is being dispelled rapidly. Autonomous vehicles – basically robots designed to travel on the roads – obviously look nothing like human beings. Even telepresence robot manufacturers have given up on robotic arms and legs, and are producing robots that often look more like a broomstick on wheels. Robotic legs are simply too difficult to operate, too costly in energy, and much too fragile with the materials we have today.
Telepresence robots – no longer shaped like human beings. No arms, no legs, definitely no genitalia. Source: Neurala.
Second Myth: Robots have a Computer for a Brain
This myth is interesting in that it’s both true and false. Obviously, robots today are operated by artificial intelligence running on a computer. However, that artificial intelligence is vastly different from the simple, rule-based programs we’ve had in the past. The state-of-the-art AI engines are based on artificial neural networks: basically a very simple simulation of a small part of a biological brain.
The big breakthrough with artificial neural networks came about when Andrew Ng and other researchers in the field showed they could use cheap graphical processing units (GPUs) to run sophisticated simulations of artificial neural networks. Suddenly, artificial neural networks appeared everywhere, for a fraction of their previous price. Today, all the major IT companies are using them, including Google, Facebook, Baidu and others.
Although artificial neural networks have mostly been confined to IT applications in recent years, they are beginning to direct robot activity as well. By employing artificial neural networks, robots can start making sense of their surroundings, and can even be trained for new tasks by watching human beings do them, instead of being programmed manually. In effect, robots employing this new technology can be thought of as having (exceedingly) rudimentary biological brains, and within the next decade they can be expected to reach an intelligence level similar to that of a dog or a chimpanzee. We will be able to train them for new tasks simply by instructing them verbally, or even by showing them what we mean.
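To give a feel for what “a very simple simulation of a small part of a biological brain” means in practice, here is a minimal sketch of a tiny artificial neural network trained by gradient descent on a toy problem. It is a bare-bones illustration in plain NumPy – nothing like the large GPU-based networks the companies above actually deploy.

```python
# A minimal two-layer artificial neural network, trained on the XOR problem.
# This is only a toy illustration of the "simulated neurons" idea; real
# systems use far larger networks running on clusters of GPUs.

import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 simulated "neurons".
W1 = rng.normal(scale=1.0, size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(scale=1.0, size=(4, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: each layer is a weighted sum followed by a nonlinearity.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass (gradient of the squared error), propagated layer by layer.
    grad_out = (output - y) * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)

    W2 -= lr * hidden.T @ grad_out
    b2 -= lr * grad_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ grad_hid
    b1 -= lr * grad_hid.sum(axis=0, keepdims=True)

print(np.round(output, 2))  # typically approaches [[0], [1], [1], [0]] as training proceeds
```

The point of the example is only this: nobody wrote an “XOR rule” into the program. The network picked it up from examples, which is exactly the difference between rule-based software and the learning systems now moving into robots.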
This video clip shows how an artificial neural network AI can ‘solve’ new situations and learn from games, until it gets to a point where it’s better than any human player.
Admittedly, the companies using artificial neural networks today are operating large clusters of GPUs that take up plenty of space and energy to operate. Such clusters cannot be easily placed in a robot’s ‘head’, or wherever its brain is supposed to be. However, this problem is easily solved when the third myth is dispelled.
Third Myth: Robots as Individual Units
This is yet another myth that we see very often in sci-fi. The Terminator, Asimov’s robots, R2D2 – those are all autonomous and individual units, operating by themselves without any connection to The Cloud. Which is hardly surprising, considering there was no information Cloud – or even widely available internet – back in the day when those tales and scripts were written.
Robots in the near future will function much more like a team of ants than as individual units. Any piece of information that one robot acquires and deems important will be uploaded to the main servers, analyzed, and shared with the other robots as needed. Robots will, in effect, learn from each other in a process that will increase their intelligence, experience and knowledge exponentially over time. Indeed, shared learning will accelerate the rate of AI development, since the more robots we have in society – the smarter they will become. And the smarter they become – the more we will want to assimilate them into our daily lives.
The Tesla cars are a good example of this sort of mutual learning and knowledge sharing. In the words of Elon Musk, Tesla’s CEO –
“The whole Tesla fleet operates as a network. When one car learns something, they all learn it.”
Elon Musk and the Tesla Model X: the cars that learn from each other. Source: AP and Business Insider.
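Neither Tesla nor anyone else has published exactly how such sharing works, but one common pattern for fleet learning is to average model updates contributed by many units. The sketch below illustrates that idea with an invented, trivially small model; it is not a description of Tesla’s actual system.

```python
# A toy sketch of fleet learning by averaging: each unit computes a small
# update to a shared model from its own local experience, and a server
# averages those updates into the next shared model. The tiny linear model
# and the data are invented purely for illustration.

import numpy as np

# Shared model: a single weight vector used by every unit in the fleet.
shared_weights = np.zeros(3)

def local_update(weights, features, target, lr=0.1):
    """One unit improves the shared model slightly using its own data."""
    prediction = features @ weights
    gradient = (prediction - target) * features
    return weights - lr * gradient

# Each unit sees a different situation (different features and target).
fleet_data = [
    (np.array([1.0, 0.2, 0.0]), 1.0),
    (np.array([0.5, 1.0, 0.3]), 0.0),
    (np.array([0.0, 0.4, 1.0]), 1.0),
]

for round_ in range(100):
    # Every unit starts from the current shared model and learns locally.
    updates = [local_update(shared_weights, x, t) for x, t in fleet_data]
    # The server averages the updates: when one car learns, they all learn.
    shared_weights = np.mean(updates, axis=0)

print(np.round(shared_weights, 2))
```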
Fourth Myth: Robots can’t make Moral Decisions
In my experience, many people still adhere to this myth, in the belief that robots do not have consciousness and thus cannot make moral decisions. But the one does not follow from the other: I can easily program an autonomous vehicle to stop before hitting human beings on the road, even without the vehicle enjoying any kind of consciousness. Moral behavior, in this case, is the product of programming.
Things get complicated when we realize that autonomous vehicles, in particular, will have to make novel moral decisions that no human being was ever required to make in the past. What should an autonomous vehicle do, for example, when it loses control over its brakes and finds itself rushing towards a person crossing the road? Obviously, it should veer to the side of the road and hit the wall. But what should it do if it calculates that its ‘driver’ will be killed as a result of the collision with the wall? Who is more important in this case? And what happens if two people are crossing the road instead of one? What if one of those people is a pregnant woman?
These questions demonstrate that it is hardly enough to program an autonomous vehicle for specific encounters. Rather, we need to program into it (or train it to obey) a set of moral rules – heuristics – according to which the robot will interpret any new situation and reach a decision.
And so, robots must make moral decisions.
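As a purely illustrative sketch of what such a set of heuristics might look like in code – the candidate actions, harm estimates and rule ordering below are invented for the example, not a proposal for how these decisions should actually be weighed:

```python
# A toy rule-based decision procedure for the brake-failure scenario above.
# The outcomes and the ordering of the heuristics are invented; this is not
# a real ethical framework, only a sketch of "rules instead of scenarios".

from dataclasses import dataclass


@dataclass
class Outcome:
    action: str
    pedestrians_harmed: int
    passengers_harmed: int


def choose_action(outcomes):
    """Apply general heuristics in priority order, rather than one rule per scenario."""
    # Heuristic 1: prefer actions that harm no one at all.
    harmless = [o for o in outcomes if o.pedestrians_harmed + o.passengers_harmed == 0]
    if harmless:
        return harmless[0]
    # Heuristic 2: otherwise, minimize the total number of people harmed.
    return min(outcomes, key=lambda o: o.pedestrians_harmed + o.passengers_harmed)


# The dilemma from the text: keep going, or veer into the wall.
options = [
    Outcome("continue straight", pedestrians_harmed=1, passengers_harmed=0),
    Outcome("veer into the wall", pedestrians_harmed=0, passengers_harmed=1),
]
# With this tie, the simple heuristics just pick the first option – which is
# precisely why richer, carefully debated rules are needed.
print(choose_action(options))
```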
Conclusion
As I wrote in the beginning of this post, the youth and the ‘techies’ are already aware of how out-of-date these myths are. Nobody yet knows, though, where the new capabilities of robots will take us when they are combined. What will our society look like when robots are everywhere, sharing their intelligence, learning from everything they see and hear, and making moral decisions not from the perception of an individual unit (as we human beings do), but from an overarching perception spanning insights and data from millions of units at the same time?
This is where we are heading: a super-intelligence composed of incredibly sophisticated AI, with robots as its eyes, ears and fingertips. It’s a frightening future, to be sure. How could we possibly control such a super-intelligence?
That’s a topic for a future post. In the meantime, let me know if there are any other myths about robots you think it’s time to ditch!