Garbage, Trash, and the Future of Jobs

“Hey, wake up! You’ve got to see something amazing!” I gently wake my four-year-old son.

He opens his eyes and his mouth stretches into a yawn. “Is it Transformers?” he asks hopefully.

“Even better!” I promise him. “Come outside to the porch with me and you’ll see for yourself!”

He dashes outside with me. Out in the street, Providence’s garbage truck is taking care of the trash bins in a completely robotic fashion. Here’s the video evidence I shot, so you can see for yourself:

 

The kid glares at me. “That’s not a Transformer,” he says.

“It’s a vehicle with a robotic arm that grabs the trash bins, lifts them up in the air and empties them into the truck,” I argue. “And then it even returns the bins to their proper place. And you really should take note of this, kiddo, because every detail in this scene provides hints about the way you’ll work in the future, and what the job market will look like.”

“What’s a job?” he asks.

I choose to ignore that. “Here are the most important points. First, routine tasks become automated. Routine tasks are those that need to be repeated without too much variation in between, and can therefore be easily handled by machines. In fact, that’s what the industrial revolution was all about – machines doing menial human labor more efficiently than human workers, on a massive scale. But in the last few decades, machines have shown themselves capable of taking on more and more routine tasks. And very soon we’ll see tasks that were considered non-routine in the past, like driving a car, being relegated to robots. So if you want to have a job in the future, try to find something that isn’t routine – a job that requires mental agility and finding solutions to new challenges every day.”

He’s decidedly rubbing his eyes, but I’m on a roll now.

“Second, we’ll still need workers, but not as many. Science fiction authors love writing about a future in which nobody will ever need to work, and robots will serve us all. Maybe this future will come to pass, but on the way there we’ll still need human workers to bridge the gap between ancient and novel systems. In the garbage truck, for example, the robotic arm replaces two or three workers, but we still need the driver to pilot the vehicle – which is ancient technology – and to deal with unexpected scenarios. Even when the vehicle becomes completely autonomous and no longer needs a driver, a few workers will still be needed to stay on alert: they’ll be called to places where the truck has malfunctioned, or where the AI has identified a situation it is incapable of dealing with, or unauthorized to handle. So there will still be human workers, just not as many as we have today.”

He opens his mouth for a yawn again, but I cut him short. “Never show them you’re tired! Which brings me to the third point: in the future, we’ll need fewer workers – but of a higher caliber. Each worker will carry a large burden on his or her shoulders. Take this driver, for example: he needs to stop at the exact spot in front of every bin, operate the robotic arm and make sure nothing gets messy. In the past, drivers didn’t need all that responsibility, because the garbage workers who rode in the back of the truck did most of the work. The modern driver also had to learn to operate the new vehicle with the robotic arm, so it’s clear that he is learning and adapting to new technologies. These are skills that you’ll need to learn and acquire for yourself. And when will you learn them?!”

“In the future,” he recites by rote in a toneless voice. “Can I go back to sleep now?”

“Never,” I promise him. “You have to get upgraded – or be left behind. Take a look at those two bins on the pavement. The robotic arm can only pick up one of them – the one that comes in the right size. The other bin is left unattended, and has to wait until a primitive human can come and take care of it. In other words, only the upgraded bin receives the efficient and rapid treatment of the garbage truck. So unless you want to be left far behind like that other trash bin, you have to prepare for the future and move along with it – or everyone else will leap ahead of you.”

He nods with drooping lids, and yawns again. I allow him to complete this yawn, at least.

“OK daddy,” he says. “Now can I go back to bed?”

I stare at him for a few more moments, while my mind returns from the future to the present.

“Yes,” I smile sadly at him. “Go back to bed. The future will wait patiently for you to grow up.”

My gaze follows him as he goes back to his room, and the smile melts from my lips. He’s still just four years old, and he will learn all the skills he needs to handle the future world as he grows up.

For him, the future will wait patiently.

For others – like those unneeded garbage workers – it’s already here.

 

Don’t Tell Me Not To Make Love with My Robot

Pepper is one of the most sophisticated household robots in existence today. Its body shape is reminiscent of a prepubescent child, reaching a height of just 120 centimeters, with a tablet on its chest. It constantly analyzes its owner’s emotions based on their speech, facial expressions and gestures, and responds accordingly. It also learns – for example, by analyzing which modes of behavior make its owner feel better. It can even use its hands to hug people.

Pepper the Robot. Source: The Monitor Daily.

No wonder that when the first 1,000 Pepper units were offered for sale in Japan for $1,600, they were all sold in one minute. Pepper is now the most famous household robot in the world.

Pepper is probably also the only robot you’re not allowed to have sex with.

According to the contract, written in Japanese legal speak and translated into English, users are not allowed to perform –

“(4) Acts for the purpose of sexual or indecent behavior, or for the purpose of associating with unacquainted persons of the opposite sex.”

What does this development mean? Here is the summary, in just three short points.

 

First Point: Is Pepper Being Used for Surveillance?

First, one has to wonder just how SoftBank, the robot’s distributor in Japan, is going to keep tabs on whether the robot has been sexually used or not. Since Pepper’s price includes a $200 monthly “data and insurance fee”, it’s a safe bet that every Pepper unit is transmitting some of its data back to SoftBank’s servers. That’s not necessarily a bad thing: as I’ve written in Four Robot Myths it’s Time We Let Go of, robots can no longer be seen as individual units. Instead, they are a form of hive brain, relying on each other’s experiences and insights to guide their behavior. In order to do that, they must be connected to the cloud.

This is obviously a form of surveillance. Pepper is sophisticated enough to analyze its owner’s emotions and responses, and can thus deliver a plethora of information to SoftBank, advertisers and even government authorities. The owners could probably activate a privacy mode (if there isn’t a privacy mode now, it will almost certainly be added in the near future by popular demand), but the rest of the time their behavior will be under close scrutiny. Not necessarily because SoftBank is actually interested in what you’re doing in your house, but simply because it wants to improve the robots.

And, well, also because it may not want you to have sex with them.

This is where things get bizarre. It is almost certainly the case that if SoftBank wished to, it could set up a sex alarm that blares autonomously whenever Pepper is repeatedly exposed to sexual acts. There doesn’t even have to be a human in the loop – just train the AI engine behind Pepper on a large enough number of porn and erotic movies, and pretty soon the robot will be able to tell by itself just what the owner is dangling in front of its cameras.
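To make the idea concrete, here is a toy sketch of how such an autonomous flag might work, assuming a (hypothetical) classifier that already assigns each camera frame a probability of depicting explicit content. The function, its name and its thresholds are all invented for illustration – a real system would sit on top of a trained vision model, not a list of numbers.

```python
# Hypothetical sketch of an on-device "chastity filter". A real robot would
# feed camera frames through a trained image classifier; here we stand in
# for that classifier with precomputed per-frame probabilities.

def flag_inappropriate(frame_scores, threshold=0.8, min_fraction=0.3):
    """Return True if enough frames score above the explicit-content threshold.

    frame_scores: per-frame probabilities from a (hypothetical) classifier.
    Requiring a sustained fraction of flagged frames avoids alarming on
    a single false positive.
    """
    if not frame_scores:
        return False
    flagged = sum(1 for s in frame_scores if s >= threshold)
    return flagged / len(frame_scores) >= min_fraction

# A mostly-innocent clip with one false positive is not flagged...
print(flag_inappropriate([0.1, 0.2, 0.9, 0.1, 0.05]))  # False
# ...while a clip with sustained high scores is.
print(flag_inappropriate([0.95, 0.9, 0.85, 0.2, 0.9]))  # True
```

The interesting design choice is the `min_fraction` parameter: it is exactly the kind of knob that lets the vendor decide how "suspicious" the robot should be before it complains to the cloud.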

The rest of the tale is obvious: the robot will complain to SoftBank via the cloud, but will do so without sharing any pictures or videos it has taken. In other words, it won’t share information, only its insights and understanding of what’s been going on in that house. SoftBank might issue a soft warning to the owner, asking them to behave more coyly around Pepper. If such chastity alerts keep coming up, though, SoftBank might have to retrieve Pepper from that house. And almost certainly, it will not allow other Pepper units to learn from the one that has been exposed to sexual acts.

And here’s the rub: if SoftBank wants to keep on developing its robots, they must learn from each other, and thus they must be connected to the cloud. But as long as SoftBank doesn’t want them to learn how to engage in sexual acts, it will have to set some kind of a filter – meaning that the robots will have to learn to recognize sexual acts, and refuse to talk about them with other robots. And silence, in the case of an always-operational robot, is as good as any testimony.

So yes, SoftBank will know when you’re having sex with Pepper.

I’ve written extensively in the past about how the meaning of private property is changing, as everything is being connected to the cloud. Tesla sells you a car, but still controls some parts of it. Google sells you devices for controlling your smart house – which it can (and does) shut down from a distance. And yes, SoftBank is selling you a robot which becomes your private property – as long as you don’t do anything with it that SoftBank doesn’t like.

And that was only the first point.

 

Second Point: Is Sex the Answer, or the Question?

There’s been some public outrage recently about sex with robots, with an actual campaign against using robots as sex objects. I sent the leaders of the campaign, Kathleen Richardson and Erik Brilling, several questions to understand the nature of their issues with the robots. They have not answered my questions, but according to their campaign website, it seems that they equate ‘robot prostitution’ with human prostitution.

“But robots don’t feel anything,” you might say now. “They don’t have feelings, or dignity of their own. Do they?”

Let’s set things straight: sexual abuse is among the most horrible things any human can do to another. The abuser causes both temporary and permanent injury to the victim’s body and mind. That’s why we call it abuse. But if there are no laws to protect a robot’s body, and no mind to speak of, why should we care whether someone uses a robot in a sexual way?

Richardson and Brilling basically claim that it doesn’t matter whether the robots are actually experiencing the joys of coitus or suffering the ignominy of prostitution. The mere fact that people will use robots in the shape of children or women for sexual release will serve to perpetuate our current societal model, in which women and children are being sexually abused.

Let’s approach the issue from another point of view, though. Could sex with robots actually prevent some cases of sexual abuse?

… Or lovers? Source: Redsilverj

Assuming that robots can provide a high-quality sexual experience to human beings, it seems reasonable that some pent-up sexual tensions could be relieved using sex robots. There are arguments that porn might actually deter sexual violence, and while the debate on that point is nowhere near a conclusion, it’s interesting to ask: if robots can actually relieve human sexual tensions, and thus deter sexual violence against other human beings – should we allow that to happen, even though it objectifies robots, and by association, women and children as well?

I would wait for more data to come in on this subject before I actually advocate for sex with robots, but in the meantime we should probably refrain from passing judgement on people who have sex with robots. Who knows? It might actually serve a useful purpose in the near future. Which brings me to the third point –

 

Third Point: Don’t You Tell Me Not to Have Sex with MY Robot

And really, that’s all there is to it.

 

When Reality Changes More Quickly than Science Fiction

Brandon Sanderson is one of my favorite fantasy and science fiction authors. He produces new books at an incredible pace, and his writing quality does not seem to suffer for it. The first book in his recent sci-fi trilogy, Steelheart from The Reckoners series, was published in September 2013. Calamity, the third and last book in the same series, was published in February 2016. So just three years passed between the first and the last book in the series.

The Reckoners trilogy. Source: Brittany Zelkovich

The books themselves describe a post-apocalyptic future, around ten years away from us. In the first book, the hero lives in one of the most technologically advanced cities in the world, with electricity, smartphones, and sophisticated technology at his disposal. Sanderson describes sophisticated weapons used by the police forces in the city, including laser weapons and even mechanized war suits. By the third book, our hero reaches another technologically advanced outpost of humanity, and is suddenly surrounded by weaponized aerial drones.

You may say that the first city chose not to use aerial drones, but that explanation is a bit sketchy, as anyone who has read the books can testify. Instead, it seems to me that in the three years that passed since the original book was published, aerial drones finally made a large enough impact on the general mindset that Sanderson could no longer ignore them in his vision of the future. He realized that his readers would look askance at any vision of the future that does not include aerial drones of some kind. In effect, the drones have become part of the way we think about the future. We find it difficult to imagine a future without them.

Usually, our visions of the future change relatively slowly and gradually. In the case of the drones, it seems that within three years they’ve moved from an obscure technological item to a common myth the public shares about the future.

Science fiction, then, can show us what people in the present expect the future to look like. And therein lies its downfall.

 

Where Science Fiction Fails

Science fiction can be used to help us explore alternative futures, and it does so admirably well. However, best-selling books must reach a wide audience and resonate with many readers on several different levels. In order to do that, the most popular science fiction authors cannot stray too far from our current notions. They cannot let go of our natural intuitions and core feelings: love, hate, the appreciation we have for individuality, and many others. They can explore themes in which the anti-hero, or The Enemy, defies these commonalities that we share in the present. However, if the author wants to write a really popular book, he or she will take care not to completely forgo the reality we know.

Of course, many science fiction books are meant for an ‘in-house’ audience: the hard-core sci-fi readers who are eager to think outside the box of the present. Alastair Reynolds, in his Revelation Space series, for example, succeeds in writing sci-fi literature for exactly this audience. He writes stories that in many aspects transcend notions of individuality, love and humanity. And he pays the price for this transgression, as his books (to the best of my knowledge) have yet to appear on the New York Times Best Seller list. Why? As one disgruntled reviewer writes about Reynolds’ book Chasm City –

“I prefer reading a story where I root for the protagonist. After about a third of the way in, I was pretty disturbed by the behavior of pretty much everyone.”


Highly popular sci-fi literature is thus forced to never let go completely of present paradigms, which sadly limits its use as a tool for developing and analyzing far-away futures. On the other hand, it’s conceivable that an annual analysis of the most popular sci-fi books could provide us with an understanding of the public state of mind regarding the future.

Of course, there are much easier ways to determine how much hype certain technologies receive in the public sphere. It’s likely that by running data mining algorithms on the content of technological blogs and websites, we would reach better conclusions. Such algorithms can also be run practically every hour of every day. So yeah, that’s probably a more efficient route to figuring out how the public views the future of technology.
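As an illustration of that route, here is a minimal sketch of such a mining pass, assuming we already have the posts as plain text. The term list, the sample posts and the function name are all made up for the example; a real system would crawl feeds and normalize for corpus size.

```python
# Toy sketch: tally how often candidate technologies are mentioned across
# a corpus of blog posts, as a crude proxy for public hype.

from collections import Counter
import re

TECH_TERMS = ["drone", "robot", "3d printing", "blockchain"]  # invented list

def hype_counts(posts, terms=TECH_TERMS):
    """Count occurrences of each term (case-insensitive) across all posts."""
    counts = Counter()
    for post in posts:
        text = post.lower()
        for term in terms:
            counts[term] += len(re.findall(re.escape(term), text))
    return counts

# Made-up posts standing in for a crawled corpus:
posts = [
    "New drone delivery trial: the drone flew five miles.",
    "A robot barista and a security robot were deployed downtown.",
    "Another drone startup raised funding this week.",
]
print(hype_counts(posts).most_common(2))  # [('drone', 3), ('robot', 2)]
```

Run daily over a real crawl, the trend lines of these counts would show a technology like drones migrating from obscurity into the shared mindset – the same shift the Sanderson books reveal, only faster and cheaper to measure.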

But if you’re looking for an excuse to read science fiction novels for a purely academic reason, just remember you found it in this blog post.

 

 

How I Became a Dreaded Zionist Robotic Spy, or – Why We Need a Privacy Standard for Robots

It all began in a horribly innocent fashion, as such things often do. The Center for Middle East Studies at Brown University, near my home, held a “public discussion” about the futures of Palestinians in Israel. Naturally, as an Israeli living in the States, I’m still very much interested in this area, so I took a look at the panelist list and immediately discovered that they all came from the same background and shared the same point of view: Israel was the colonialist oppressor, and that was pretty much all there was to it in their view.


Quite frankly, this seemed bizarre to me: how can you have a discussion about the future of a people in a region without understanding the complexities of their geopolitical situation? How can you talk about the future in a war-torn region like the Middle East, when nobody speaks about security issues, or conveys the state of mind of Israeli citizens or their government? In short, how can you have a discussion when all the panelists say exactly the same thing?

So I decided to do something about it, and therein lies my downfall.

I am the proud co-founder of TeleBuddy – a robotics services start-up company that operates telepresence robots worldwide. If you want to reach somewhere far away – Israel, California, or even China – we can place a robot there, so that instead of wasting time and health on flying, you can just log into the robot and be there immediately. We mainly use Double Robotics’ robots, and since I had one free for use, I immediately thought we could use it to bring a representative of the Israeli point of view to the panel – in a robotic body.

Things began moving in a blur from that point. I obtained permission from Prof. Beshara Doumani, who organized the panel, to bring a robot to the event. StandWithUs – an organization that disseminates information about Israel in the United States – graciously agreed to send a representative by the name of Shahar Azani to log into the robot, and so it happened that I came to the event with possibly the first-ever robotic diplomat.


Things went very well at the event itself. While my robotic friend was not allowed to speak from the stage, he talked with people in the venue before the event began, and had plenty of fun. Some of the people at the event seemed excited about the robot. Others were reluctant to approach him, so he talked with other people instead. The entire thing was very civil, as other participants in the panel later remarked. I really thought we had found a good use for the robot, and even suggested to the organizers that next time they could use TeleBuddy’s robots to ‘teleport’ a different representative – maybe a Palestinian – to their event. I went home happily, feeling I had made just a little bit of a difference in the world and contributed to an actual discussion between the two sides of a conflict.

A few days later, Open Hillel published a statement about the event, as follows –

“In a dystopian twist, the latest development in the attack on open discourse by right-wing pro-Israel groups appears to be the use of robots to police academic discourse. At a March 3, 2016 event about Palestinian citizens of Israel sponsored by Middle East Studies at Brown University, a robot attended and accosted students. The robot used an iPad to display a man from StandWithUs, which receives funding from Israel’s government.

Before the event began, students say, the robot approached students and harassed them about why they were attending the event. Students declined to engage with this bizarre form of intimidation and ignored the robot. At the event itself, the robot and the StandWithUs affiliate remained in the back. During the question and answer session, the man briefly left the robot’s side to ask a question.

It is not yet known whether this was the first use of a robot to monitor Israel-Palestine discourse on campus. … Open Hillel opposes the attempts of groups like StandWithUs to monitor students and faculty. As a student-led grassroots campaign supported by young alumni, professors, and rabbis, Open Hillel rejects any attempt to stifle or target student or faculty activists. The use of robots for purposes of surveillance endangers the ability of students and faculty to learn and discuss this issue. We call upon outside groups such as StandWithUs to conduct themselves in accordance with the academic principles of open discourse and debate.”

 

 

I later ran into some of the students who were at the event, and asked them why they believed the robot was used for surveillance, or to harass students. In return, they accused me of being a spy for the Israeli government. Why? Obviously, because I operated a “surveillance drone” on American soil. That’s perfect circular logic.

 

Lessons

There are lessons aplenty to be drawn from this bizarre incident, but the one that strikes me in particular is that you can’t easily ignore existing cultural sentiments and paradigms without taking a hit in the process. The robot was obviously not a surveillance drone, nor meant for surveillance of any kind, but Open Hillel managed to rebrand it by playing on fears that have deep roots in the American public. They did it to promote their own goal of getting some PR, and they did it so skillfully that I can’t help but applaud them for it. Quite frankly, I wish their PR people were working for me.

That said, there are issues here that need to be dealt with if telepresence robots are ever to become part of critical discussions. The fear that the robot may be recording or taking pictures at an event is justified – a tech-savvy person controlling the robot could certainly find a way to do that. However, I can’t help but feel that there are less clever ways to accomplish the same thing, such as using one’s smartphone, or the covert Memoto lifelogging camera. If you fear being recorded in public, you should know that telepresence robots are probably the least of your concerns.

 

Conclusions

The honest truth is that this is a brand new field for everyone involved. How should robots behave at conferences? Nobody knows. How should they talk with human beings at panels or public events? Nobody can tell yet. How can we make human beings feel more comfortable sharing a space with a suit-wearing robot that can potentially record everything it sees? Nobody has any clue whatsoever.

These issues should be taken into consideration in any venture to involve robots in the public sphere.

It seems to me that we need some kind of a standard, to be developed in a collaboration between ethicists, social scientists and roboticists, which will ensure a high level of data encryption for telepresence robots and an assurance that any data collected by the robot will be deleted on the spot.
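As a very rough sketch of the deletion half of such a standard, consider a policy in which sensor data lives only in volatile memory and is discarded the moment it has been used. The class and method names below are invented for illustration; a real standard would also specify encryption in transit, which this sketch omits.

```python
# Toy sketch of a "delete on the spot" policy for a telepresence robot:
# frames are held only in memory, and consuming one purges everything.

class EphemeralBuffer:
    def __init__(self):
        self._frames = []

    def capture(self, frame):
        """Store a freshly captured frame in volatile memory only."""
        self._frames.append(frame)

    def consume_latest(self):
        """Hand over the newest frame and forget everything captured."""
        frame = self._frames[-1] if self._frames else None
        self._frames.clear()  # no data survives past its immediate use
        return frame

buf = EphemeralBuffer()
buf.capture("frame-001")
buf.capture("frame-002")
print(buf.consume_latest())  # frame-002
print(buf.consume_latest())  # None -- nothing was retained
```

The point of the sketch is that the deletion guarantee can be a structural property of the software, auditable by a third party, rather than a promise in the operator’s terms of service.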

We need, in short, to develop proper robotic etiquette.

And if we fail to do that, then it shouldn’t really surprise anyone when telepresence robots are branded as “surveillance drones” used by Zionist spies.

Robit: A New Contender in the Field of House Robots

The field of house robots has been abuzz for the last two years. It began with Jibo – the first cheap house robot, originally advertised on Indiegogo, where it gathered nearly $4 million. Jibo doesn’t look at all like Asimov’s vision of humanoid robots. Instead, it resembles a small cartoon-like version of Eve from the Wall-E movie. Jibo can understand voice commands, recognize and track faces, and even take pictures of family members and speak and interact with them. It can do all that for just $750 – which seems like a reasonable deal for a house robot. Romo is another house robot, for just $150 or so, with a cute face and a quirky attitude, which sadly went out of production last year.

 

Pictures of house robots: Pepper (~$1,600), Jibo (~$750), Romo (~$130). Image on the right originally from That’s Really Possible.

 

Now comes a new contender in the field of house robots: Robit, “The Robot That Gets Things Done”. It moves around the house on its three wheels, wakes you up in the morning, looks after lost items like your shoes or keys on the floor, detects smoke and room temperature, and even delivers beer for you on a tray. And it’s doing all that for just $349 on Indiegogo.


I interviewed Shlomo Schwarcz, co-founder & CEO at Robit Robot, about Robit and the present and future of house robots. Schwarcz emphasized that unlike Jibo, Robit is not supposed to be a ‘social robot’. You’re not supposed to talk with it or have a meaningful relationship with it. Instead, it is your personal servant around the house.

“You choose the app (guard the house, watch your pet, play a game, dance, track objects, find your lost keys, etc.) and Robit does it. We believe people want a Robit that can perform useful things around the house rather than just chat.”

It’s an interesting choice, and it seems that other aspects of Robit conform to it. While Jibo and Romo are pleasant to look at, Robit’s appearance can be somewhat frightening, with a head that resembles that of a human baby. The question is, can Robit actually do everything promised in the campaign? Schwarcz mentions that Robit is essentially a mobile platform that runs apps, and the developers have created apps that cover the common and basic usages: remote control from a smartphone, movement and face detection, dance, and a “find my things” app.

Other, more sophisticated apps will probably be left to third parties. These will include Robit analyzing foodstuffs and determining their nutritional value, launching toy missiles at items around the house using a tiny missile launcher, and keeping watch over your cat so that it doesn’t climb on that precious sofa that used to belong to your mother-in-law. These are all great ideas, but they still need to be developed by third parties.

This is where Robit both wins and fails at the same time. The developers realized that no robotic device in the near future is going to be a standalone achievement. Robots are all going to be connected together, learn from each other and share insights by means of a virtual app market that can be updated every second. Used that way, robots everywhere can evolve much more rapidly. And as Schwarcz says –

“…Our vision [is] that people will help train robots and robots will teach each other! Assuming all Robits are connected to the cloud, one person can teach a Robit to identify, say a can and this information can be shared in the cloud and other Robits can download it and become smarter. We call these bits of data “insights”. An insight can be identifying something, understanding a situation, a proper response to an event or even just an eye and face expression. Robots can teach each other, people will vote for insights and in short time they will simply turn themselves to become more and more intelligent.”

That’s an important vision for the future, and one that I fully agree with. The only problem is that it requires the creation of an app market for a device that is not yet out there on the market and in people’s houses. The iPhone app store was an overnight success because the device reached the hands of millions in the first year of its existence, and probably because it was also an organic continuation of the iTunes brand. At the moment, though, there is no similar app management system for robots, and certainly not enough robots out there to justify the creation of such a system.
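To make the shared-insights vision a bit more tangible, here is a minimal sketch of how a cloud “insight” store with crowd voting might look, assuming a simple service that robots push labels to and pull vetted labels from. All class, method and label names are invented for illustration.

```python
# Toy sketch of Schwarcz's "shared insights" vision: robots upload what
# they learn, owners' confirmations act as votes, and other robots only
# download insights that have passed a vote threshold.

class InsightCloud:
    def __init__(self):
        self.insights = {}  # label -> vote count

    def upload(self, label):
        """A robot (or its owner) contributes a newly learned insight."""
        self.insights[label] = self.insights.get(label, 0) + 1

    def download(self, min_votes=2):
        """Other robots fetch only insights vetted by enough votes."""
        return sorted(l for l, v in self.insights.items() if v >= min_votes)

cloud = InsightCloud()
cloud.upload("soda can")   # first robot learns to identify a can
cloud.upload("soda can")   # a second robot confirms it
cloud.upload("slipper")    # only one vote so far, not vetted yet
print(cloud.download())    # ['soda can']
```

The vote threshold is the interesting knob here: it is what keeps one mistrained Robit from polluting the shared brain, at the cost of slowing down how quickly a genuinely new insight spreads.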

At the moment, the Robit crowdfunding campaign is progressing slowly. I hope that Robit makes it through, since it’s an innovative idea for a house robot, and it definitely has potential. Whether it succeeds or fails, the campaign mainly shows that the house robot concept is one that innovators worldwide are rapidly becoming attached to, and are trying to find the best ways to implement. Twenty years from now, we’ll laugh about all the wacky ideas these innovators had, but the best of those ideas – those that survived the test of time and the market – will serve us in our houses. Seen from that aspect, Schwarcz is one of those countless unsung heroes: the ones who try to make a change in a market that nobody understands, and dare greatly.

Will he succeed? That’s for the future to decide.

 

 

Images of Israeli War Machines from 2048

Do you want to know what war would look like in 2048? The Israeli artist Pavel Postovit has drawn a series of remarkable images depicting soldiers, robots and mechs – all in the service of the Israeli army in 2048. He even drew aerial ships resembling the infamous Triskelion from The Avengers (which had an unfortunate tendency to crash every second week or so).

Pavel is not the first artist to attempt to envision the future of war. Jakub Rozalski before him tried to reimagine World War II with robots, and Simon Stalenhag has many drawings demonstrating what warfare could look like in the future. Their drawings, obviously, are a way to forecast possible futures and bring them to our attention.

Pavel’s drawings may not be based on rigorous foresight research, but they don’t have to be. They are mainly focused on showing us one way the future may unfold. Pavel himself does not pretend to be a futures researcher, and told me that –

“I was influenced by all kind of different things – Elysium, District 9 [both are sci-fi movies from the last few years], and from my military service. I was in field intelligence, on the border with Syria, and was constantly exposed to all kinds of weapons, both ours and the Syrians.”

Here are a couple of drawings to help you understand Pavel’s vision of the future, divided according to categories I added. Be aware that the last picture is the most haunting of all.

 

Mechs in the Battlefield

Mechs are a form of ground vehicle with legs – much like Boston Dynamics’ Alpha Dog, which they are presumably based on. The most innovative of those mechs is the DreamCatcher – a unit with arms and hands that is used to collect “biological intelligence in hostile territory”. In one particularly disturbing image we can see why it’s called “DreamCatcher”, as the mech beheads a deceased human fighter and takes the head for inspection.


Apparently, mechs in Pavel’s future work almost autonomously – they can reach hostile areas on the battlefield and carry out complicated tasks on their own.

 

Soldiers and Aerial Drones

Soldiers in the field will be accompanied by aerial drones. Some of the drones will be larger than others – the Tinkerbell, for example, can serve both for recon and as personal CAS (Close Air Support) for the individual soldier.


Other aerial drones will be much smaller, and will be deployed as a swarm. The Blackmoth, for example, is a swarm of stealthy micro-UAVs used to gather tactical intelligence on the battlefield.


 

Technology vs. Simplicity

Throughout Pavel’s visions of the future we can see a repeated pattern: the technological prowess of the West colliding with the simple lifestyle of the natives. Since the images depict the Israeli army, it’s obvious why the machines are essentially fighting or constraining the Palestinians. You can see in the images below what life might look like in 2048 for Arab civilians and combatants.


Another interesting picture shows Arab combatants dealing with a heavily armed combat mech by trying to make it lose its balance. At the same time, one of the combatants sits to the side with a laptop – presumably trying to hack into the robot.


 

The Last Image

If the images above have made you feel somewhat shaken, don’t worry – it’s perfectly normal. You’re seeing here a new kind of warfare, in which robots take an extremely active part against human beings. That’s war for you: brutal and horrible, and there’s not much to be done about that. If robots can actually minimize the amount of suffering on the battlefield by replacing soldiers, and by carrying out tasks with minimal casualties on both sides – that might actually be better than the human-based model of war.

Perhaps that is why I find the last picture the most horrendous one. In it you can see a combatant, presumably an Arab, with a bloody machete next to him and two prisoners he’s holding in a cage. The combatant is reading a James Bond book. The symbolism is clear: this is the new kind of terrorist / combatant. He is vicious, ruthless, and well-educated in Western culture – at least well enough to develop his own ideas for using technology to carry out his ideology. In other words, this is an ISIS combatant: ISIS has begun to employ some of the technologies of the West, like aerial drones, without adhering to the moral theories that restrict their use by nations.


 

Conclusion

The future of warfare in Pavel’s vision is beginning to leave the paradigm of human-on-human action and is rapidly moving toward robotic warfare. It is very difficult to think of a military future that does not include robots, and obviously we should start thinking right now about the consequences, and about how (and whether) we can imbue robots with sufficient autonomous capabilities to carry out missions on their own while still minimizing casualties on the enemy side.

You can check out the rest of Pavel’s (highly recommended) drawings in THIS LINK.

Four Robot Myths it’s Time We Let Go of

A week ago I lectured in front of an exceedingly intelligent group of young people in Israel – “The President’s Scientists and Inventors of the Future”, as they’re called. I decided to talk about the future of robotics and its uses in society, and as an introduction to the lecture I tried to dispel a few myths about robots that I’ve heard repeatedly from older audiences. Perhaps not so surprisingly, the kids were just as disenchanted with these myths as I was. All the same, I’m writing down the four robot myths here, for all the ‘old’ people (20+ years old) who are not as well acquainted with technology as our kids.

As a side note: I lectured in front of the Israeli teenagers about the future of robotics, even though I’m currently residing in the United States. That’s another thing robots are good for!

I’m lecturing as a tele-presence robot to a group of bright youths in Israel, at the Technion.

 

First Myth: Robots must be shaped as Humanoids

Ever since Karel Capek’s first play about robots, the general notion in the public has been that robots have to resemble humans in their appearance: two legs, two hands and a head with a brain. Fortunately, most sci-fi authors stop at that point and do not add genitalia as well. The idea that robots have to look just like us is, quite frankly, ridiculous, and stems from an overblown appreciation of our own form.

Today, this myth is being dispelled rapidly. Autonomous vehicles – basically robots designed to travel on the roads – obviously look nothing like human beings. Even telepresence robot manufacturers have given up on robotic arms and legs, and are producing robots that often look more like a broomstick on wheels. Robotic legs are simply too difficult to operate, too costly in energy, and much too fragile with the materials we have today.

Telepresence robots – no longer shaped like human beings. No arms, no legs, definitely no genitalia. Source: Neurala.

 

Second Myth: Robots have a Computer for a Brain

This myth is interesting in that it’s both true and false. Obviously, robots today are operated by artificial intelligence running on a computer. However, that artificial intelligence is vastly different from the simple, rule-based systems we’ve had in the past. The state-of-the-art AI engines are based on artificial neural networks: basically very simple simulations of a small part of a biological brain.

The big breakthrough with artificial neural networks came about when Andrew Ng and other researchers in the field showed they could use cheap graphics processing units (GPUs) to run sophisticated simulations of artificial neural networks. Suddenly, artificial neural networks appeared everywhere, at a fraction of their previous price. Today, all the major IT companies use them, including Google, Facebook, Baidu and others.
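To give a feel for what such a “simple simulation of a small part of a biological brain” actually is, here is a toy artificial neural network in Python/NumPy that learns the XOR function from examples. It is a minimal sketch, not a model of any company’s production system; the layer sizes and learning rate are arbitrary choices for illustration.

```python
import numpy as np

# A toy two-layer neural network learning XOR from four examples.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))   # input -> hidden
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for step in range(5000):
    # Forward pass: each 'neuron' sums its weighted inputs and fires.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward pass: nudge every connection to reduce the error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(0, keepdims=True)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(0, keepdims=True)

print(f"error before training: {losses[0]:.3f}, after: {losses[-1]:.3f}")
```

Nobody programmed the XOR rule into the network; it emerged from repeatedly adjusting connection strengths – which is exactly why GPUs, built to do many such small multiplications in parallel, made these networks cheap to run.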

Although artificial neural networks were largely confined to IT in recent years, they are beginning to direct robot activity as well. By employing artificial neural networks, robots can start making sense of their surroundings, and can even be trained for new tasks by watching human beings perform them, instead of being programmed manually. In effect, robots employing this new technology can be thought of as having (exceedingly) rudimentary biological brains, and in the next decade they can be expected to reach an intelligence level similar to that of a dog or a chimpanzee. We will be able to train them for new tasks simply by instructing them verbally, or even by showing them what we mean.

 

This video clip shows how an artificial neural network AI can ‘solve’ new situations and learn from games, until it gets to a point where it’s better than any human player.

 

Admittedly, the companies using artificial neural networks today operate large clusters of GPUs that take up plenty of space and energy. Such clusters cannot easily be placed in a robot’s ‘head’, or wherever its brain is supposed to be. However, this problem is easily solved once the third myth is dispelled.

 

Third Myth: Robots as Individual Units

This is yet another myth that we see very often in sci-fi. The Terminator, Asimov’s robots, R2D2 – these are all autonomous, individual units, operating by themselves without any connection to the Cloud. Which is hardly surprising, considering there was no information Cloud – or even widely available internet – back when those tales and scripts were written.

Robots in the near future will function much more like a colony of ants than as individual units. Any piece of information that one robot acquires and deems important will be uploaded to the main servers, analyzed, and shared with the other robots as needed. Robots will, in effect, learn from each other in a process that will increase their intelligence, experience and knowledge exponentially over time. Indeed, shared learning will accelerate the rate of AI development, since the more robots we have in society, the smarter they will become. And the smarter they become, the more we will want to assimilate them into our daily lives.
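The shared-learning loop can be sketched in a few lines of Python. The class and method names below are invented for illustration – a real system would add validation, conflict resolution and model retraining – but the pattern is the same: one robot reports, and the whole fleet knows.

```python
# A minimal sketch of fleet learning: robots pool what they learn
# in a central store, so each benefits from every other's experience.
class FleetKnowledge:
    """Central store that pools what individual robots learn."""
    def __init__(self):
        self.facts = {}   # situation -> best known response

    def report(self, situation, response):
        self.facts[situation] = response

    def query(self, situation):
        return self.facts.get(situation)

class Robot:
    def __init__(self, name, fleet):
        self.name = name
        self.fleet = fleet

    def learn(self, situation, response):
        # Anything one robot learns is uploaded for the whole fleet.
        self.fleet.report(situation, response)

    def handle(self, situation):
        # Any robot can draw on the fleet's pooled experience.
        return self.fleet.query(situation)

fleet = FleetKnowledge()
scout = Robot("scout-1", fleet)
worker = Robot("worker-7", fleet)
scout.learn("icy road ahead", "reduce speed")
# worker-7 never encountered an icy road, yet already knows what to do:
print(worker.handle("icy road ahead"))  # prints "reduce speed"
```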

The Tesla cars are a good example of this sort of mutual learning and knowledge sharing. In the words of Elon Musk, Tesla’s CEO –

“The whole Tesla fleet operates as a network. When one car learns something, they all learn it.”

Elon Musk and the Tesla Model X: the cars that learn from each other. Source: AP and Business Insider.

Fourth Myth: Robots can’t make Moral Decisions

In my experience, many people still adhere to this myth, under the belief that robots do not have consciousness and thus cannot make moral decisions. This conflates morality with consciousness: I can easily program an autonomous vehicle to stop before hitting human beings on the road, even without the vehicle enjoying any kind of consciousness. Moral behavior, in this case, is the product of programming.

Things get complicated when we realize that autonomous vehicles, in particular, will have to make novel moral decisions that no human being has ever been required to make in the past. What should an autonomous vehicle do, for example, when it loses control of its brakes and finds itself rushing toward a man crossing the road? Obviously, it should veer to the side of the road and hit the wall. But what should it do if it calculates that its ‘driver’ will be killed as a result of the collision with the wall? Who is more important in this case? And what happens if two people cross the road instead of one? What if one of those people is a pregnant woman?

These questions demonstrate that it is hardly enough to program an autonomous vehicle for specific encounters. Rather, we need to program into it (or train it to obey) a set of moral rules – heuristics – according to which the robot will interpret any new occurrence and reach a decision.
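To make the idea of a moral heuristic concrete, here is a deliberately crude sketch in Python: the vehicle scores the predicted outcome of each available action against a general rule, rather than being programmed for one specific encounter. The rule and its weights are my own illustrative assumptions, not a proposal for how such trade-offs should actually be weighed.

```python
# A crude "moral heuristic": choose the action with the least predicted harm.
def choose_action(scenario, actions):
    """Pick the action whose predicted outcome scores lowest on harm."""
    def harm(outcome):
        # Illustrative weights: fatalities count most, then injuries,
        # then property damage.
        return (outcome.get("fatalities", 0) * 100
                + outcome.get("injuries", 0) * 10
                + outcome.get("property_damage", 0))
    return min(actions, key=lambda a: harm(scenario[a]))

# Brakes have failed; predicted outcomes for each available action:
scenario = {
    "continue straight": {"fatalities": 1},                  # hits pedestrian
    "veer into wall":    {"injuries": 1, "property_damage": 5},
}
print(choose_action(scenario, ["continue straight", "veer into wall"]))
# prints "veer into wall"
```

Notice that the hard part is not the code – it is deciding on the weights, which is precisely the moral question the dilemmas above raise.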

And so, robots must make moral decisions.

 

Conclusion

As I wrote in the beginning of this post, the youth and the ‘techies’ are already aware of how out-of-date these myths are. Nobody as yet, though, knows where the new capabilities of robots will take us when they are combined together. What will our society look like, when robots are everywhere, sharing their intelligence, learning from everything they see and hear, and making moral decisions not from an individual unit perception (as we human beings do), but from an overarching perception spanning insights and data from millions of units at the same time?

This is where we are heading: a super-intelligence composed of incredibly sophisticated AI, with robots as its eyes, ears and fingertips. It’s a frightening future, to be sure. How could we possibly control such a super-intelligence?

That’s a topic for a future post. In the meantime, let me know if there are any other myths about robots you think it’s time to ditch!

 

Forecast: In 2016, Terrorists Will Use Aerial Drones for Terrorist Attacks – But What Will Those Drones Carry?

A year ago I wrote a short chapter for a book about emerging technologies and their impact on security, published by Yuval Ne’eman Workshop for Science, Technology & Security and curated by Deb Housen-Couriel. The chapter focused on drones and the various ways they’re being used in the hands of criminals to smuggle drugs across borders, to identify and raid urban marijuana farms operated by rival gangs, and to smuggle firearms and lifestyle luxury items over prison walls. At the end of the paper I provided a forecast: drones will soon be used by terrorists to kill people.

Well, it looks like the future is catching up with us: a report from Syria (as covered in Popular Mechanics) has just confirmed that ISIS is using small drones as weapons, albeit not very sophisticated ones. In fact, the terrorists are simply loading the drones with explosives and trying to crash them into enemy forces.

That, of course, is hardly surprising to anyone who has studied the use of drones by ISIS. The organization is drawing young and resourceful Muslims from the West, some of whom have expertise with emerging technologies like 3D-printers and aerial drones. These kinds of technologies can be developed today in a garage for a few hundred dollars, so it should not surprise anyone that ISIS is using aerial drones wherever it can.

The Islamic State started using drones in 2014, but they were utilized mainly for media and surveillance purposes. Drones were used to capture some great images from battles, as well as for battlefield reconnaissance. Earlier in 2015, the U.S. decided that ISIS drones are important enough to be targeted for destruction, and launched an airstrike to destroy a drone and its operators. In other words, the U.S. spent tens or even hundreds of thousands of dollars in ammunition and fuel for some of the most expensive and sophisticated aircraft and missiles in the world, in order to destroy a drone likely costing less than one thousand dollars.

ISIS is using drones on the battlefield. Source: Vocativ

All of this evidence comes from just this year and the one before it. How can we expect drones to be used by terrorist organizations in 2016?

 

Scenarios for Aerial Drone Terrorist Attacks

In research presented in 2013, two Dutch researchers from TNO Defence Research summed up four scenarios for the malicious use of drones. Two of these scenarios target civilians, and would therefore count as terrorist attacks against unarmed civilians.

In the first scenario, a drone with a small machine gun is directed into a stadium, where it opens fire on the crowd. While the drone would most probably crash within a few seconds because of the recoil, the panic caused by the attack would cause many people to trample each other in their flight to safety.

In the second scenario, a drone would be used by terrorists to drop an explosive straight on the head of a politician, in the middle of a public speech. Security forces at present are essentially helpless in the face of such a threat; at most they can order the politician into hiding as soon as they see a drone in the sky – which is obviously an impractical solution.

Both of the above scenarios have been validated in recent years, albeit in different ways. A drone was illegally flown into a stadium in the middle of a soccer game between Serbia and Albania. Instead of carrying a machine gun, the drone carried the national flag of Greater Albania – which one of the Serbian players promptly ripped down. He was assaulted immediately by the Albanian players, and soon enough the fans stormed the field, trampling over fences and policemen in the process.

 

The second scenario occurred in September 2013, in the midst of an election campaign event in Germany. A drone operated by a 23-year-old man was spotted taking pictures in the sky. The police ordered the operator to land the drone immediately, and he did just that – crashing the drone, intentionally or not, at the feet of German Chancellor Angela Merkel. If that drone had been armed with even a small amount of explosives, the event would have ended in a very different fashion.

As you can understand from these examples, aerial drones can easily be used as tools for terrorist attacks. Their potential has not nearly been realized, probably because terrorists are still struggling to equip those lightweight drones with enough explosives and shrapnel to make an actual impact. But drones function just as well with other types of payload – which can be even scarier than explosives.

Here’s a particularly nasty example: sometime in 2016, in a bustling European city, you are sitting and eating peacefully in a restaurant. You see a drone flash by, and smile and point at it, when suddenly it makes a sharp turn, dives into the restaurant and hovers in the center for a few seconds. Then it sprays all the guests with a red-brown liquid: blood which the terrorists have drawn from an HIV-positive individual. Just half a liter of blood is more than enough to decorate a room and cover everyone’s faces. And now imagine the same happening in ten other restaurants in that city, at the same time.

Would you, as a tourist, ever come back to these restaurants? Or to that city? The damage to tourism and to morale would be disastrous – and the terrorists can make all that happen without resorting to any illegal substances or equipment. No explosives at all.

 

Conclusion and Forecast

Here’s today’s forecast: in 2016, if terrorists have their wits about them (and it seems the ISIS ones certainly do, most unfortunately), they will carry out a terrorist attack utilizing drones. They may use the drones for scouting out the ground, or they may actually use them to carry explosives or other types of offensive materials. Regardless, drones are such an incredibly useful tool in the hands of individual terrorists that it’s impossible to believe they will not be used somehow.

How can we defend ourselves from drone terrorist attacks? In the next post I will analyze the problem using a foresight methodology called Causal Layered Analysis, in order to get to the bottom of the issue and consider possible solutions.

Till that time, if you find yourself eating in a restaurant when a drone comes in – duck quickly.

 

Kitchen of the Future Coming to Your House Soon – Or Only to the Rich?

 

You’re watching MasterChef on TV. The contestants are making their very best dishes and bring them to the judges for tasting. As the judges’ eyes roll back with pleasure, you are left sitting on your couch with your mouth watering at the praises they heap upon the tasty treats.

Well, it doesn’t have to be that way anymore. Meet Moley, the first robotic cook that might actually reach your household.

Moley consists mostly of two highly versatile robotic arms that repeat human motions in the kitchen. The arms can do basically anything a human being can, and in fact receive their ‘training’ by recording highly esteemed chefs at work. According to the company behind Moley, the robot will come with more than 2,000 digital recipes installed, and will be able to execute each and every one of them with ease.
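The record-and-replay idea behind this kind of training can be sketched in Python. The class and the recorded motions below are invented for illustration – the real Moley system captures far richer motion data than simple labeled waypoints – but the principle is the same: record a chef once, replay forever.

```python
# A minimal sketch of record-and-replay training for a kitchen robot arm.
class RecordedRecipe:
    def __init__(self, name):
        self.name = name
        self.steps = []   # (timestamp in seconds, recorded action) pairs

    def record(self, t, action):
        """Store one captured motion of the chef at time t."""
        self.steps.append((t, action))

    def replay(self):
        """Replay the chef's recorded motions in time order."""
        return [action for _, action in sorted(self.steps)]

# Recording a (hypothetical) chef's motions once:
risotto = RecordedRecipe("mushroom risotto")
risotto.record(0.0, "grasp pan handle")
risotto.record(2.5, "pour oil")
risotto.record(5.0, "stir clockwise")

# The robot can now repeat the dish any number of times:
print(risotto.replay())
# prints ['grasp pan handle', 'pour oil', 'stir clockwise']
```

This is also why a library of 2,000 digital recipes is plausible: each recipe is just data, recorded once and distributed to every robot.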

I could go on describing Moley, but a picture is worth a thousand words, and a video clip is worth around thirty thousand words a second. So take a minute of your time to watch Moley in action. You won’t regret it.

 

 

Moley is projected to get to market in 2017, and should cost around $15,000.

What impact could it have for the future? Here are a few thoughts.

 

Impact on Professional Chefs

Moley is not a chef. It is incapable of thinking up new dishes on its own. In fact, it is not much more than a ‘monkey’ replicating every movement of the original chef. This description, however, pretty much applies to 99 percent of kitchen workers in restaurants. They spend their work hours doing exactly as the chef tells them to. As a result, they produce dishes that should be close to identical to each other.

As Moley and similar robotic kitchen assistants come into use, we will see a reduced need for cooks and kitchen workers in many restaurants. This trend will be particularly noticeable in large fast food chains like McDonald’s, which have the funds to install a similar system in every branch, thereby cutting their costs. And the kitchen workers in those places? Most of them will not be needed anymore.

Professional chefs, though, stand to gain a lot from Moley. In a way, food design could become very similar to creating apps for smartphones. Apps are so hugely successful because everybody has an end device – the smartphone – and can download an app immediately for a small cost. Similarly, when many kitchens make use of Moley, professional chefs can make lots of money by selling new and innovative digital recipes for just one dollar each.

 

Sushi for all? That is one app I can’t wait for.

 

Are We Becoming a Plutonomy?

In 2005, Citigroup sent a memo to its wealthiest clients, suggesting that the United States is rapidly turning into a plutonomy: a nation in which the wealthy and the prosperous are driving the economy, while everybody else pretty much tags along. In the words of the report –

“There is no such thing as “The U.S. Consumer” or “UK Consumer”, but rich and poor consumers in these countries… The rich are getting richer; they dominate spending. Their trend of getting richer looks unlikely to end anytime soon.”

There is much evidence to support Citigroup’s analysis, and Boston Consulting Group has reached similar conclusions when forecasting the increase in financial wealth of the super-rich in the near future. In short, it would seem that the rich keep getting richer, whereas the rest of us are not enjoying anywhere near the same pace of financial growth. It is therefore hardly surprising to find that one of the top pieces of advice given by Citigroup in its Plutonomy Memo was basically to invest in companies and firms that provide services to the rich and the wealthy. After all, they’re the ones whose wealth keeps increasing as time moves on. Why should companies cater to the poor and the downtrodden, when they can focus on huge gains from the top 10 percent of the population?

Moley could easily be a demonstration of a service that befits a plutonomy. At $15,000 per robot, Moley could find its place in every millionaire’s house. At the same time, it could put out of work many of the low-level, low-earning cooks in kitchens worldwide.

You might say, of course, that those low-level cooks would be able to compete in the new app market as well, and offer their own creations to the public. You would be correct, but consider that almost any digital market becomes a “winner takes all” market. There is simply no room for many big winners in the app – or digital recipe – market.

Moley, then, is essentially another invention driving us closer to plutonomy.

 

And yet…

New technologies have always cost some people their livelihood, while helping many others. Matt Ridley, in his masterpiece The Rational Optimist, describes how the guilds fought relentlessly against the industrial revolution in England, even though that revolution led, in a relatively short period of time, to a betterment of the human condition in England. Some people lost their workplace as a result of the industrial revolution, but they found new jobs. In the meantime, everybody suddenly enjoyed better and cheaper clothes, better products in the stores, and an overall improvement in the economy, since England could export its surplus of products.

Moley and similar robots will almost certainly cost some people their workplaces, but they also have the potential to reduce the cost of food, cut the time spent making food in the household (I spend 45-60 minutes every day making food for my family and me), and elevate the lifestyle quality of the general public – but only if the technology drops in price and can be deployed in many venues, including private homes.

 

Conclusion

If it’s a forecast you want, then here it is. While we can’t know for sure whether it will be Moley itself that conquers the market or some other company’s robot, it seems likely that as AI continues to develop and drop in price, robots will become part of many households. I believe prices will drop significantly over a period of twenty years, so that almost everybody will be able to enjoy the presence of kitchen robots in their homes.

That said, pricing and services are not a matter of technological prowess alone, but also a social one: will the robotic companies focus on the wealthy, or will they find financial models with which to provide services for the poor as well?

This decision could shape our future as we know it, and define whether we’ll keep our headlong dive towards plutonomy.