We hear all around us about the major breakthroughs that await just around the bend: of miraculous cures for cancer, of amazing feats of genetic engineering, of robots that will soon take over the job market. And yet, underneath all the hubbub, there lurk the little stories – the occasional bizarre occurrences that indicate the kind of world we’re going into. One of those recent tales happened at the beginning of this year, and it can provide a few hints about the future. I call it – The Tale of the Little Drone that Could.
Our story begins towards the end of January 2017, when said little drone was launched in southern Arizona as part of a simple exercise. The drone was part of the Shadow RQ-7Bv2 series, but we'll just call it Shady from now on. Drones like Shady are usually used for surveillance by the US Army, and should not stray more than 77 miles (124 km) away from their ground-based control station. But Shady had other plans in the mind it didn't have: as soon as it was launched, all communications were lost between the drone and the control station.
Shady the drone. Source: Department of Defense
Other, more primitive drones would probably have crashed at around this stage, but Shady was a special drone indeed. You see, Shadow drones enjoy a high level of autonomy. In simpler words, they can stay in the air and keep performing their mission even if they lose their connection with the operator. The only issue was that Shady didn't know what its mission was. And as the confused operators on the ground realized at that moment – nobody really had any idea what it was about to do.
Autonomous aerial vehicles are usually programmed to perform certain tasks when they lose communication with their operators. Emergency systems are activated as soon as the drone realizes that it's all alone, up there in the sky. Some drones circle above a certain point until radio connection is reestablished. Others attempt to land immediately, or try to return to the point from which they were launched. This, at least, is what the emergency systems should do. Except that in Shady's case, a malfunction happened, and they didn't.
Or maybe they did.
Some believe that Shady's memory accidentally contained the coordinates of its former home at a military base in Washington state, and that it valiantly attempted to come back home. Or maybe it didn't. These are, obviously, just speculations. It's entirely possible that the emergency systems simply failed to jump into action, and Shady just kept sailing up in the sky, flying towards the unknown.
Be that as it may, our brave (at least in the sense that it felt no fear) little drone left its frustrated operators behind and headed north. It rode the strong winds of that day, and sailed over forests and Native American reservations. Throughout its flight, the authorities kept track of the drone by radar, but after five hours it reached the Rocky Mountains. It should not have been able to pass them, and since the military lost its radar signature at that point, everyone just assumed Shady had crashed.
But it didn’t.
Instead, Shady rose higher, to an altitude of 12,000 feet (4,000 meters), and glided up and over the Rocky Mountains, in environmental conditions it was not designed for and over distances it was never meant to cover. Nonetheless, it kept buzzing north, undeterred, on a 632-mile journey, until it crashed near Denver. We don't know the reason for the crash yet, but it's likely that Shady simply ran out of fuel at about that point.
The Rocky Mountains. Shady crossed them too.
And that is the tale of Shady, the little drone that never thought it could – mainly since it doesn’t have any thinking capabilities at all – but went the distance anyway.
What Does It All Mean?
Shady is just one autonomous robot out of many. Autonomous robots, even limited ones, can perform certain tasks with minimal involvement by a human operator. Shady's tale is simply the result of a bug in the robot's operating system. There's nothing strange in that by itself, since we discover bugs in practically every program we use: the Word program I'm using to write this post occasionally (though rarely, fortunately) gets stuck, or even starts deleting letters and words by itself, for example. These bugs are annoying, but we realize that they're practically inevitable in programs as complex as the ones we use today.
Well, Shady had a bug as well. The only difference between Word and Shady is that the latter is a military drone worth $1.5 million, and the bug caused it to cross three states and the Rocky Mountains with no human supervision. It can be safely said that we're all lucky that Shady is normally used only for surveillance, and is thus unarmed. But Shady's less innocent cousin, the Predator drone, is also used to attack military targets on the ground, and is thus equipped with two Hellfire anti-tank missiles and six Griffin air-to-surface missiles.
A Predator drone firing away.
I rather suspect that we would be less amused by this episode if one of the armed Predators were to take Shady's place and sail across America, with nobody knowing where it was going, or what it was planning to do once it got there.
Robots and Urges
I'm sure that the emotionally laden story at the beginning of this post made some of you laugh, and for a very good reason. Robots have no will of their own. They have no thoughts or self-consciousness. The sophisticated autonomous robots of the present, though, do exhibit "urges": programmers build certain urges into the robots, which are activated in predefined ways.
In many ways, autonomous robots resemble insects. Both are conditioned – by programming or by the structure of their simple neural systems – to act in certain ways in certain situations. From that viewpoint, insects and autonomous robots both have urges. And while insects are quite complex organisms, they have bugs as well – which is the reason that mosquitos keep flying into the light of electric traps at night. Their simple urges are incapable of dealing with the new demands of the modern environment. And if insects can experience bugs in unexpected environments, how much more so for autonomous robots?
Shady's tale shows what happens when a robot obeys the wrong kind of urges. Such bugs are inevitable in any complex system, but their impact could be disastrous when they occur in autonomous robots – especially of the armed variety found on the battlefield.
Scared? Take Action!
If this revelation scares you as well, you may want to sign the open letter that the Future of Life Institute released around a year and a half ago, against the use of autonomous weapons in war. You won’t be alone out there: more than a thousand AI researchers have already signed that letter.
Will governments be deterred from employing autonomous robots in war? I highly doubt it. We failed to stop even potentially world-shattering nuclear proliferation, so putting a halt to robotic proliferation doesn't seem likely. But at least when the next Shady or Freddy the Predator gets lost, you'll be able to shake your head in disappointment and mention that you just knew it would happen, that you warned everyone in advance, and that nobody listened to you.
And when that happens, you’ll finally know what being a futurist feels like.
OK, so I know the headline of this post isn't really the sort a stable and serious scientist, or even a futurist, should be asking. But I was asked this question on Quora, and thought it warranted some thought. So here's my answer to this mystery that has hounded movie directors for the last century or so!
If Japan actually managed to create the huge robots / exoskeletons so favored in the anime genre, all the generals in all the opposing armies would stand up and clap wildly for them, because these robots are practically the worst war machines ever. And believe it or not, I know that because we conducted actual research in this area, together with Dr. Aharon Hauptman and Dr. Liran Antebi.
But before I tell you about that research, let me say a few words about the woes of huge humanoid robots.
First, there are already some highly sophisticated exoskeleton suits developed by major military contractors – such as Raytheon's XOS2 and Lockheed Martin's HULC. While they're definitely the coolest thing since sliced bread and frosted donuts, they have one huge disadvantage: they need plenty of energy to work. As long as you can connect them to a power line, it shouldn't be too much of an issue. But once you ask them to go out to the battlefield… well, after one hour at most they'll stop working, and quite likely trap the human operating them.
Some companies, like Boston Dynamics, have tried to overcome the energy challenge by adding a diesel engine to their robots. Which is great, except for the fact that it’s still pretty cumbersome, and extremely noisy. Not much use for robots that are supposed to accompany marines on stealth missions.
Robots: Left – Raytheon’s XOS2 exoskeleton suit; Upper right – Lockheed Martin’s HULC; Bottom right – Boston Dynamics’ Alpha Dog.
But who wants stealthy robots, anyway? We’re talking about gargantuan robots, right?!
Well, here's the thing: the larger and heavier the robot, the more energy you need to operate it. That means you can't really add much armor to it. And the larger you make it, the more unwieldy it becomes. There's a reason elephants are so sturdy, with thick legs – that's the only way they can support their enormous body weight. Huge robots, which are much heavier than elephants, can't even have legs with joints. When the MK. II Mech was unveiled at Maker Faire 2015, it reached a height of 15 feet, weighed around 6 tons… and could only move by crawling on a caterpillar track. So, in short, it was a tank.
And don’t even think about it rising to the air. Seriously. Just don’t.
Megabots' MK. II Mech, complete with the quintessential sexy pilot.
But let’s say you manage to somehow bypass all of those pesky energy constraints. Even in that case, huge humanoid robots would not be a good idea because of two main reasons: shape, and size.
Let's start with shape. The human body evolved the way it did – limbs, groin, hair and all – to cope with the hardships of life on the one hand, while also being able to have sex, give birth and generally do fun stuff on the other. But robots aren't supposed to do fun stuff. Unless, that is, you want to build a huge Japanese humanoid sex robot. And yes, I know that sounds perfectly logical for some horribly unfathomable reason, but that's not what the question is about.
So – if you want a battle-robot, you just don’t need things like legs, a groin, or even a head with a vulnerable computer-brain. You don’t need a huge multifunctional battle-robot. Instead, you want small and efficient robots that are uniquely suited to the task set for them. If you want to drop bombs, use a bomber drone. If you want to kill someone, use a simple robot with a gun. Heck, it can look like a child’s toy, or like a ball, but what does it matter? It just needs to get the job done!
You don't need a gargantuan Japanese robot for battle. You can even use robots as small as General Robotics' Dogo: basically a small tank the size of your foot, which carries a Glock pistol and can use it efficiently.
Last but not least, large humanoid robots are not only inefficient, cumbersome and impractical, but are also extremely vulnerable to being hit. One solid hit to the head will take them out. Or to a leg. Or the torso. Or the groin of that gargantuan Japanese sex-bot that’s still wondering why it was sent to a battlefield where real tanks are doing all the work. That’s why armies around the world are trying to figure out how to use swarms of drones instead of deploying one large robot: if one drone takes the hit, the rest of the swarm still survives.
So now that I've thrown cold water on the idea of large Japanese humanoid robots, here's the final rub. A few years ago I took part in a research project with Dr. Aharon Hauptman and Dr. Liran Antebi, meant to assess the capabilities that robots will possess in the next twenty years. I'll cut straight to the chase: the experts we interviewed and surveyed believed that in twenty years or less we'll have –
Robots with perfect camouflage capabilities in visible light (essentially invisibility);
Robots that can heal themselves, or use objects from the environment as replacement parts;
Biological robots.
One of the only categories about which the experts were skeptical was that of "transforming platforms" – i.e. robots that can change shape to adapt themselves to different tasks. There is just no need for these highly versatile (and expensive, inefficient and vulnerable) robots, when you can send ten highly specialized robots to perform each task in turn. Large humanoid robots are the same. There's just no need for them in warfare.
So, to sum things up: if Japan were to construct anime-style Gundam-like robots and send them to war, I really hope they prepare them for having sex, because they would be screwed over pretty horribly.
I’ve done a lot of writing and research recently about the bright future of AI: that it’ll be able to analyze human emotions, understand social nuances, conduct medical treatments and diagnoses that overshadow the best human physicians, and in general make many human workers redundant and unnecessary.
I still stand behind all of these forecasts, but they are meant for the long term – twenty or thirty years into the future. And so, the question that many people want answered is about the situation at the present. Right here, right now. Luckily, DARPA has decided to provide an answer to that question.
DARPA is one of the most interesting US agencies. It's dedicated to funding 'crazy' projects – ideas that are completely outside the accepted norms and paradigms. DARPA contributed to the establishment of the early internet and the Global Positioning System (GPS), as well as a flurry of other bizarre concepts, such as legged robots, prediction markets, and even self-assembling work tools. Ever since DARPA was first founded, it has focused on moonshots and breakthrough initiatives, so it should come as no surprise that it's also focusing on AI at the moment.
Recently, DARPA's Information Innovation Office released a new YouTube clip explaining the state of the art of AI, outlining its capabilities in the present – and considering what it could do in the future. The online magazine Motherboard described the clip as "targeting [the] AI hype", and as "necessary viewing". It's 16 minutes long, but I've condensed its core messages – and my thoughts about them – into this post.
The Three Waves of AI
DARPA distinguishes between three different waves of AI, each with its own capabilities and limitations. Out of the three, the third one is obviously the most exciting, but to understand it properly we’ll need to go through the other two first.
First AI Wave: Handcrafted Knowledge
In the first wave of AI, experts devised algorithms and software according to the knowledge that they themselves possessed, and tried to provide these programs with logical rules that were deciphered and consolidated throughout human history. This approach led to the creation of chess-playing computers and of delivery optimization software. Most of the software we're using today is based on AI of this kind: our Windows operating system, our smartphone apps, and even the traffic lights that allow people to cross the street when they press a button.
Modria is a good example of how this kind of AI works. Modria was hired in recent years by the Dutch government to develop an automated tool that helps couples get divorced with minimal involvement from lawyers. Modria, which specializes in the creation of smart justice systems, took the job and devised an automated system that relies on the knowledge of lawyers and divorce experts.
On Modria's platform, couples that want to divorce are asked a series of questions. These could include questions about each parent's preferences regarding child custody, property distribution and other common issues. After the couple answers the questions, the system automatically identifies the topics about which they agree or disagree, and tries to direct the discussions and negotiations toward the optimal outcome for both.
First wave AI systems are usually based on clear and logical rules. The systems examine the most important parameters in every situation they need to handle, and reach a conclusion about the most appropriate action to take in each case. The parameters for each type of situation are identified in advance by human experts. As a result, first wave systems find it difficult to tackle new kinds of situations. They also have a hard time abstracting – taking knowledge and insights derived from certain situations and applying them to new problems.
To sum it up, first wave AI systems are capable of implementing simple logical rules for well-defined problems, but are incapable of learning, and have a hard time dealing with uncertainty.
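To make that concrete, here's a minimal sketch in Python – entirely my own invention, not Modria's actual code – of what first wave logic looks like: every rule and parameter is written in advance by a human expert, and the program simply applies them.

```python
# A toy first wave system: fixed, expert-written rules for triaging one
# topic in a divorce questionnaire (loosely inspired by the Modria
# example above; the rules and answers are hypothetical).

def triage_topic(spouse_a_answer: str, spouse_b_answer: str) -> str:
    """Apply hand-crafted rules; nothing here is learned from data."""
    if spouse_a_answer == spouse_b_answer:
        return "agreement - record it and move on"
    if "negotiable" in (spouse_a_answer, spouse_b_answer):
        return "near-agreement - direct the couple to guided negotiation"
    return "dispute - escalate to a human mediator"

print(triage_topic("shared custody", "shared custody"))  # agreement
print(triage_topic("full custody", "negotiable"))        # near-agreement
print(triage_topic("full custody", "no visits"))         # dispute
```

The system works exactly as well as its rules do – and falls apart the moment a couple's situation doesn't fit any of them.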
Now, some readers may shrug at this point and say that this is not artificial intelligence as most people think of it. The thing is, our definitions of AI have evolved over the years. If I had asked a person on the street, thirty years ago, whether Google Maps is AI software, he wouldn't have hesitated in his reply: of course it is AI! Google Maps can plan an optimal course to get you to your destination, and even explain in clear speech where you should turn at each and every junction. And yet, many today see Google Maps' capabilities as elementary, and require AI to do much more: AI should also take control of the car on the road, develop a conscious philosophy that takes the passenger's desires into consideration, and make coffee at the same time.
Well, it turns out that even 'primitive' software like Modria's justice system and Google Maps are fine examples of AI. And indeed, first wave AI systems are being utilized everywhere today.
Second AI Wave: Statistical Learning
In 2004, DARPA held its first Grand Challenge. Fifteen autonomous vehicles competed to complete a 150-mile course in the Mojave desert. The vehicles relied on first wave AI – i.e. rule-based AI – and immediately proved just how limited this kind of AI actually is. Every picture taken by the vehicle's camera, after all, is a new sort of situation that the AI has to deal with!
To say that the vehicles had a hard time handling the course would be an understatement. They could not distinguish between different dark shapes in images, and couldn't figure out whether a shape was a rock, a far-away object, or just a cloud obscuring the sun. As the Grand Challenge deputy program manager said, some vehicles "were scared of their own shadow, hallucinating obstacles when they weren't there."
The sad result of the first DARPA Grand Challenge
None of the groups managed to complete the entire course, and even the most successful vehicle got only 7.4 miles into the race. It was a complete and utter failure – exactly the kind of research that DARPA loves funding, in the hope that the insights and lessons derived from these early experiments will lead to the creation of more sophisticated systems in the future.
And that is exactly how things went.
One year later, when DARPA held Grand Challenge 2005, five groups successfully made it to the end of the track. Those groups relied on the second wave of AI: statistical learning. The head of one of the winning groups was immediately snatched up by Google, by the way, and put in charge of developing Google's autonomous car.
In second wave AI systems, the engineers and programmers don't bother teaching precise and exact rules for the systems to follow. Instead, they develop statistical models for certain types of problems, and then 'train' these models on many different samples to make them more precise and efficient.
Statistical learning systems are highly successful at understanding the world around them: they can distinguish between two different people or between different vowels. They can learn and adapt themselves to different situations if they’re properly trained. However, unlike first wave systems, they’re limited in their logical capacity: they don’t rely on precise rules, but instead they go for the solutions that “work well enough, usually”.
The poster boy of second wave systems is the artificial neural network. In artificial neural networks, the data goes through computational layers, each of which processes the data in a different way and transmits it to the next layer. By training each of these layers, as well as the complete network, they can be shaped into producing more accurate results. Oftentimes, the training requires the networks to analyze tens of thousands of data samples to achieve even a tiny improvement. But generally speaking, this method provides better results than those achieved by first wave systems in certain fields.
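To give you a feel for what those layers actually do, here's a toy example in Python – my own illustration, not code from any real system. A tiny two-layer network trains itself to compute the XOR function by nudging its weights over thousands of passes; notice that no rule about XOR appears anywhere in the code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Four training samples of the XOR function; the third input column is a
# constant 'bias', a standard trick that gives each neuron a threshold.
X = np.array([[0., 0., 1.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

rng = np.random.default_rng(1)
W1 = rng.normal(size=(3, 8))   # weights of the first computational layer
W2 = rng.normal(size=(8, 1))   # weights of the second computational layer

for _ in range(10_000):                   # thousands of passes over the samples
    h = sigmoid(X @ W1)                   # layer 1 processes the data...
    out = sigmoid(h @ W2)                 # ...and passes it on to layer 2
    d_out = (out - y) * out * (1 - out)   # how wrong was the final conclusion?
    d_h = (d_out @ W2.T) * h * (1 - h)    # pass the blame back down the layers
    W2 -= 0.5 * (h.T @ d_out)             # nudge each layer's weights...
    W1 -= 0.5 * (X.T @ d_h)               # ...toward more accurate outputs

print(out.round(2))  # should end up close to [[0], [1], [1], [0]]
```

The network ends up "knowing" XOR, but that knowledge is smeared across dozens of weight values that no human wrote – which is exactly the property that makes these systems both powerful and opaque.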
So far, second wave systems have managed to outdo humans at face recognition, at speech transcription, and at identifying animals and objects in pictures. They're making great leaps forward in translation, and if that's not enough – they're starting to control autonomous cars and aerial drones. The success of these systems at such complex tasks leaves AI experts astounded, and for a very good reason: we're not yet quite sure why they actually work.
The Achilles' heel of second wave systems is that nobody is certain why they work so well. We see artificial neural networks succeed at the tasks they're given, but we don't understand how they do so. Furthermore, it's not clear that there actually is a methodology – some kind of reliance on ground rules – behind artificial neural networks. In some respects they are indeed much like our brains: we can throw a ball into the air and predict where it's going to fall, without calculating Newton's equations of motion, or even being aware of their existence.
This may not sound like much of a problem at first glance. After all, artificial neural networks seem to be working "well enough". But Microsoft may not agree with that assessment. The firm released a bot on social media last year, in an attempt to emulate human writing and make light conversation with youths. The bot, christened "Tay", was supposed to replicate the speech patterns of a 19-year-old American girl, and talk with teenagers in their unique slang. Microsoft figured the youths would love that – and indeed they did. Many of them began pranking Tay: they told her of Hitler and his great success, revealed to her that the 9/11 terror attack was an inside job, and explained in no uncertain terms that immigrants are the bane of the great American nation. And so, a few hours later, Tay began applying her newfound knowledge, claiming live on Twitter that Hitler was a fine guy altogether, and really did nothing wrong.
That was the point when Microsoft's engineers took Tay down. Her last tweet was that she was taking a time-out to mull things over. As far as we know, she's still mulling.
This episode exposed the causality challenge which AI engineers are currently facing. We could predict fairly well how first wave systems would function under certain conditions. But with second wave systems we can no longer easily identify the causality of the system – the exact way in which input is translated into output, and data is used to reach a decision.
None of this means that artificial neural networks and other second wave AI systems are useless. Far from it. But it's clear that if we don't want our AI systems to get all excited about the Nazi dictator, some improvements are in order. We must move on to the next, third wave of AI systems.
Third AI Wave: Contextual Adaptation
In the third wave, the AI systems themselves will construct models that will explain how the world works. In other words, they’ll discover by themselves the logical rules which shape their decision-making process.
Here’s an example. Let’s say that a second wave AI system analyzes the picture below, and decides that it is a cow. How does it explain its conclusion? Quite simply – it doesn’t.
There's an 87% chance that this is a picture of a cow. Source: Wikipedia
Second wave AI systems can't really explain their decisions – just as a kid could not write down Newton's equations of motion just by looking at the movement of a ball through the air. At most, second wave systems can tell us that there is an "87% chance of this being a picture of a cow".
Third wave AI systems should be able to add some substance to the final conclusion. When a third wave system analyzes the same picture, it will probably say that since there is a four-legged object in it, there's a higher chance of it being an animal. And since its surface is white splotched with black, it's even more likely to be a cow (or a Dalmatian dog). Since the animal also has udders and hooves, it's almost certainly a cow. That, assumedly, is what a third wave AI system would say.
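Since no true third wave system exists yet, here's only a toy Python sketch of my own of what such 'explained' reasoning might look like – the features, labels and weights are all invented for illustration, and a real system would learn them rather than have them spelled out:

```python
# A toy sketch of "explained" classification, in the spirit of the cow
# example above. Every feature, label and weight here is made up.
EVIDENCE = {
    "four legs": {"cow": 1.0, "dalmatian": 1.0, "car": 0.2},
    "white with black splotches": {"cow": 2.0, "dalmatian": 2.0, "car": 0.5},
    "udders and hooves": {"cow": 3.0, "dalmatian": 0.1, "car": 0.0},
}

def classify(features):
    """Accumulate evidence per label and keep a human-readable trace."""
    scores = {"cow": 0.0, "dalmatian": 0.0, "car": 0.0}
    explanation = []
    for feature in features:
        weights = EVIDENCE[feature]
        for label, weight in weights.items():
            scores[label] += weight
        favored = max(weights, key=weights.get)
        explanation.append(f"'{feature}' points mostly toward: {favored}")
    verdict = max(scores, key=scores.get)
    return verdict, explanation

verdict, trace = classify(["four legs", "white with black splotches",
                           "udders and hooves"])
print(verdict)           # cow
print("\n".join(trace))  # a step-by-step account of how we got there
```

The point is not the verdict but the trace: a third wave system is supposed to hand you the chain of evidence alongside the conclusion.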
Third wave systems will be able to rely on several different statistical models to reach a more complete understanding of the world. They'll be able to train themselves – just as AlphaGo did when it played a million Go games against itself – to identify the commonsense rules they should use. Third wave systems will also be able to take information from several different sources to reach a nuanced and well-explained conclusion. These systems could, for example, extract data from several of our wearable devices, from our smart home, from our car and from the city in which we live, and determine our state of health. They'll even be able to program themselves, and potentially develop abstract thinking.
The only problem is that, as the director of DARPA’s Information Innovation Office says himself, “there’s a whole lot of work to be done to be able to build these systems.”
And this, as far as the DARPA clip is concerned, is the state of the art of AI systems in the past, present and future.
What It All Means
DARPA's clip does indeed explain the differences between different AI systems, but it does little to assuage the fears of those who urge us to exercise caution in developing AI engines. DARPA makes clear that we're not even close to developing a 'Terminator' AI, but that was never the issue in the first place. Nobody is claiming that AI today is sophisticated enough to do all the things it's supposed to do in a few decades: have a motivation of its own, make moral decisions, and even develop the next generation of AI.
But the fulfillment of the third wave is certainly a major step in that direction.
When third wave AI systems are able to decipher new models that improve their function, all on their own, they'll essentially be able to program new generations of software. When they understand context and the consequences of their actions, they'll be able to replace most human workers, and possibly all of them. And when they're allowed to reshape the models via which they appraise the world, they'll actually be able to reprogram their own motivation.
All of the above won't happen in the next few years, and certainly won't be fully achieved in the next twenty years. As I explained, no serious AI researcher claims otherwise. The core message of researchers and visionaries who are concerned about the future of AI – people like Stephen Hawking, Nick Bostrom, Elon Musk and others – is that we need to start asking right now how to control these third wave AI systems, of the kind that will become ubiquitous twenty years from now. When we consider the capabilities of these AI systems, this message does not seem far-fetched.
The Last Wave
The most interesting question for me, which DARPA does not seem to delve into, is what the fourth wave of AI systems will look like. Will it rely on an accurate emulation of the human brain? Or maybe fourth wave systems will exhibit decision-making mechanisms that we are as yet incapable of understanding – and which will be developed by the third wave systems themselves?
These questions are left open for us to ponder, to examine and to research.
That's our task as human beings, at least until third wave systems go on to do that too.
I was asked on Quora what Google will look like in 2030. Since that is one of the most important issues the world is facing right now, I took some time to answer it in full.
Larry Page, one of Google's two co-founders, once said off-handedly that Google is not about building a search engine. In his words: "Oh, we're really making an AI." Google right now is all about building the world brain that will take care of every person, all the time and everywhere.
By 2030, Google will have that World Brain in existence, and it will look after all of us. And that’s quite possibly both the best and worst thing that could happen to humanity.
To explain that claim, let me tell you a story of how your day is going to unfold in 2030.
2030 – A Google World
You wake up in the morning, January 1st, 2030. It’s freezing outside, but you’re warm in your room. Why? Because Nest – your AI-based air conditioner – knows exactly when you need to wake up, and warms the room you’re in so that you enjoy the perfect temperature for waking up.
You go out to the street, and order an autonomous taxi to take you to your workplace. Who programmed that autonomous car? Google did. Who acquired Waze – a crowdsourcing navigation app? That’s right: Google did.
After lunch, you take a stroll around the block, with your Google Glass 2.0 on your eyes. Your smart glasses know it's a cold day, and they know you like hot cocoa, and they also know that there's a cocoa store just around the bend which your friends have recommended before. So they offer to take you there – and if you agree, Google earns a few cents out of anything you buy in the store. And who invented Google Glass…? I'm sure you get the picture.
I can go on and on, but the basic idea is that the entire world is going to become connected in the next twenty years. Many items will have sensors in and on them, and will connect to the cloud. And Google is not only going to produce many of these sensors and appliances (such as the Google Assistant, autonomous cars, Nest, etc.) but will also assign a digital assistant to every person, that will understand the user better than that person understands himself.
I probably don't have to explain why the Google World Brain will make our lives much more pleasant. The perfect coordination and optimization of our day-to-day dealings will ensure that we need to invest fewer resources (energy, time, concentration) to achieve a high quality of life. I see that primarily as a good thing.
So what’s the problem?
The Downside
Here's the thing: the digital world suffers from what's called "The One Winner Effect". Basically, it means that there's only room for one big winner in every sector. So there's only one Facebook – the second largest social network in English is Twitter, with only ~319 million users. That's nothing compared to Facebook's 1.86 billion users. Similarly, Google controls ~65% of the online search market – a huge share when you realize that competitors like Yahoo and Bing, large and established services, split most of the remaining ~35% between them. So again, one big winner.
So what’s the problem, you ask? Well, a one-winner market tends to create soft monopolies, in which one company can provide the best services, and so it’s just too much of a hassle to leave for other services. Google is creating such a soft monopoly. Imagine how difficult it will be for you to wake up tomorrow morning and migrate your e-mail address to one of the competitors, transfer all of your Google Docs there, sell your Android-based (Google’s OS!) smartphone and replace it with an iPhone, wake up cold in the morning because you’ve switched Nest for some other appliance that hasn’t had the time to learn your habits yet, etc.
Can you imagine yourself doing that? I'm sure some ardent souls will, but most of humanity doesn't care deeply enough, or doesn't even have the option to stop using Google. How do you stop using Google, when every autonomous car on the street has a Google camera? How do you stop using Google, when your website depends on Google not banning it? How do you stop using Google, when practically every non-iPhone smartphone relies on the Android operating system? This is a Google World.
And Google knows it, too.
Google Flexes Its Muscles
Recently, around 200 people were banned from using Google services because they cheated Google by reselling the Pixel smartphone. Those people woke up one morning and found out they couldn't log into their Gmail, that they couldn't access their Google Docs, and – had they been living in the future – they would've probably found out they couldn't use Google's autonomous cars and other apps on the street. They were essentially sentenced to a digital death.
Now, public uproar caused Google to back down and revive those people's accounts, but this episode shows you the power that Google is starting to amass. And what's more, Google doesn't have to ban people in such a direct fashion. Imagine, for example, that your website is demoted by Google's search engine (whose inner workings nobody knows) simply because you're talking against Google. Google is allowed by law to do that. So who's going to stand up and talk smack about Google? Not me, that's for sure. I love Google.
To sum things up, Google is not required by law to serve everyone, or even to be 'fair' in its recommendations about services. And as it gathers more power and becomes more prevalent in our daily lives, we will need to find mechanisms to ensure that Google or Google-equivalent services are provided to everyone, to keep people from being left outside the system, and to preserve their ability to speak up against Google and other monopolies.
So in conclusion, it’s going to be a Google world, and I love Google. Now please share this answer, since I’m not sure Google will!
Note: all this is not to say that Google is 'evil' or similar nonsense. It is not even unique – if Google were to fall tomorrow, Amazon, Apple, Facebook or even Snapchat would take its place. This is simply the nature of the world at the moment: digital technologies give rise to big winners.
A few months ago I received a tempting offer: to become ISIS’ chief technology officer.
How could I refuse?
Before you pick up the phone and call the police, you should know that it was 'just' a wargame, initiated and operated by the strategic consulting firm Wikistrat. Many experts on ISIS and the Middle East in general took part in the wargame, assuming the roles of some of the sides waging war right now on Syrian soil – from Syrian president Bashar al-Assad, to the Western-backed rebels, and even ISIS.
This kind of wargame is pretty common in security organizations, as a way to understand how the enemy thinks. As Harper Lee wrote, "You never really understand a man… until you climb into his skin and walk around in it."
And so, to understand ISIS, I climbed into its skin, and started thinking aloud and discussing with my ISIS teammates what we could do to really overwhelm our enemies.
But who are those enemies?
In one word, everyone.
This is no exaggeration. Abu Bakr al-Baghdadi, the leader of ISIS and its self-proclaimed caliph, warned Muslims in 2015 that the organization's war is "the Muslims' war altogether. It is the war of every Muslim in every place, and the Islamic State is merely the spearhead in this war."
Other spiritual authorities who help explain ISIS' policies to foreigners and potential converts agree with Baghdadi. The influential Muslim preacher Abu Baraa has similarly stated that "the world is divided into two camps. Make sure you are on the side of the Muslims. You shouldn't be on the side of the infidels, nor should you be on the fence, neutral…"
This approach is, of course, quite convenient for ISIS, since the organization needs to draw as many Muslims as possible to its camp. And so, thinking as ISIS, we realized that we must find a way to turn this seemingly small conflict of ours into a full-blown religious war: Muslims against everyone else.
Unfortunately, it seems most Muslims around the world do not agree with those ideas.
How could we convince them into accepting the truth of the global religious war?
It was obvious that we needed to create a fracture between the Muslim and Christian world, but world leaders weren’t playing to our tune. The last American president, Barack Obama, fiercely refused to blame Islam for terror attacks, emphasizing that “We are not at war with Islam.”
French president Francois Hollande was even worse for our cause: after an entire summer of terror attacks in France, he still refused to blame Islam. Instead, he instituted a new Foundation for Islam in France, to improve relations with the nation’s Muslim community.
The situation was clearly dire. We needed reinforcements: fighters from Western countries. We needed Muslims to join us, or at the very least to rebel against their Western governments, but very few were joining us from Europe. Reports put the number of European Muslims joining ISIS at barely 4,000, out of 19 million Muslims living in Europe. That means just 0.02% of the Muslim population actually cared enough about ISIS to join us!
Things were even worse in the USA, where, according to the Pew Research Center, Muslims were generally content with their lives. They were just as likely as other Americans to have earned college degrees and attended graduate school, and to report household incomes of $100,000 or more. Nearly two thirds of Muslims stated that they "do not see a conflict between being a devout Muslim and living in a modern society". Not much chance of inciting a holy war there.
So we agreed on trying the usual things: planning terror attacks, making as much noise as we possibly could, keeping up the fight in the Middle East and recruiting Muslims on social media. But we realized that things really needed to change if radical Islam were to have any chance at all. We needed a new kind of world leader: one who would play along with our idea of a global conflict; one who would close borders to Muslims, and make Muslim immigrants feel unwanted in their own countries; one who would turn a deaf ear to the pleas of refugees, simply because they came from Muslim countries.
After a single week in ISIS, it was clear that the organization desperately needed a world leader who thinks and acts like that.
Do you happen to know someone who might fit that bill?
I've recently begun writing on Quora (and yes, that's just one of the reasons I haven't been posting here as much as I should). One of the recent questions I was asked to answer concerned the far, far away future. Specifically –
“What can you do today to be remembered 10,000 or 100,000 years from now?”
So if you’re wondering along the same lines, here’s my answer.
This is a tough one, but I think I’ve got the solution you’re looking for. Before I hand it over to you, let’s see why the most intuitive idea – that of leaving a time capsule buried somewhere in the ground – is also probably the wrong way to solve this puzzle.
A time capsule is a box you can bury in the ground, which will keep your writings in pristine condition right up to the moment it is opened by your son's son's son's son's son's (repeat a few thousand times) son. Let's call him… Multison.
So, what will you leave in the time capsule for dear Multison? Your personal diary? Newspaper clippings about you? If that's the case, then you should know that even the best preserved books and scrolls will decay to dust within a few thousand years, unless you keep them in vacuum conditions and without touching them.
So maybe leave him a recording? That's great, but be sure to use the right kind of recording equipment, like Millenniata's M-DISC DVDs, which are supposed to last for ~10,000 years (no refunds).
But here's an even more difficult problem: language evolves. We can barely understand the English of Shakespeare's plays, which were written less than 500 years ago. Even if you were to write yourself into a book and leave it in a well-preserved time capsule for 10,000 years, it is likely that nobody would be able to read it when it's opened. The same applies to any kind of recording.
So what can you do? Etch your portrait on a cave wall, like the cavemen did? That's great, except that you'll need to do it in thousands of caves, just for the chance that some of the drawings will survive. And what can Multison learn about you from an etched portrait with no words? Basically, all that we know about the cavemen from their drawings is which animals they used to hunt. That's not a very efficient way to transmit information through the ages.
Another possibility (and one that I've considered trying myself) is to genetically engineer bacteria to contain information about you in their genetic code. Scientists have already shown they can write information into the DNA of bacteria, turning them into living hard drives. Some microorganisms should have room enough for thousands of bytes of data, and each time they replicate, each of the descendants will carry the message forward into the future. You have the evolving-language issue here again, but at least you'll get the text of the message across to Multison. He should really appreciate all the effort you've put into this, by the way.
But he probably won't even know about it, because bacteria are not great copywriters. Every time a bacterium divides in two, some of its DNA mutates. When critical genes mutate, the bacterium dies. But your text is not essential to the germ's continued existence, and so it is most likely that within a few thousand years (probably closer to a decade), the bacteria will simply shed the extra DNA load.
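If you're wondering what 'writing' text into DNA even looks like, here's a minimal Python sketch using a naive scheme of my own invention (two bits per nucleotide). Real DNA-storage codecs add heavy error correction, precisely because of the mutation problem I just described.

```python
# A minimal sketch of packing text into DNA bases: 2 bits per nucleotide,
# a naive illustrative scheme with no error correction whatsoever.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(text: str) -> str:
    bits = "".join(f"{byte:08b}" for byte in text.encode("utf-8"))
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> str:
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

message = "Remember me, Multison!"
dna = encode(message)
print(dna)          # four DNA bases per character of the message
print(decode(dna))  # round-trips back to the original text
```

A single mutated base scrambles the character it sits in, and a deleted base scrambles everything after it – which is exactly why the bacteria option keeps your message safe for decades, not millennia.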
Have you despaired already? Well, don't, because here is a chart that could inspire hope again. It's from Stewart Brand's highly recommended book "The Clock of the Long Now", and it shows the time frames in which changes occur.
Brand believes that each 'layer' changes and evolves at a different pace. Fashion changes by the week, while changes in commerce and infrastructure take years to accomplish, and (unfortunately) so do changes in governance. Culture and nature, on the other hand, take thousands of years to change. We still know of the idea of Zeus, the Greek god, even though there are almost no Zeus worshippers today. And we still remember the myths of the Bible, even though their origins are thousands of years old.
So my suggestion for you? Start a new cultural trend, and make sure to imbue it with all the properties that will keep it viable through the ages. You can create a religion, for example. It's easier than it sounds. The Mormon religion was created only two hundred years ago, with amazingly delusional claims, which didn't seem to bother anyone anyway. And now there are a little more than 15 million Mormons in the world. If they keep up this pace, they'll be a major religion within a few hundred years, and their founder and prophet, Joseph Smith, will live for a very long time in their collective memory.
So a religion is probably the best solution, since it’s a self-conserving mechanism for propagating knowledge down the ages. You can even include commandments to fight other religions (and so increase your religion’s resistance to being overtaken by other ideas), or command your worshippers to mention your name every day so that they never forget it. Or that they should respect their mothers and fathers, so that people will want to teach the religion to their children. Or that they shouldn’t kill anyone (except for blasphemers, of course) so that the number of worshippers doesn’t dwindle. Or that…
Actually, now that I think of it, you may be too late.
Good luck outfighting Jehovah, Jesus and Muhammad.
I was recently asked to write a short article for kids, explaining what "The Singularity" is. So – here's my shot at it. Let me know what you think!
Here's an experiment that fits all ages: approach your mother and father (if they're asleep, use caution). Ask them gently about the time before you were born, and whether they dared think back then that one day everybody would post and share their images on a social network called "Facebook". Or that they would receive answers to every question from a mysterious entity called "Google". Or enjoy the services of a digital adviser called "Waze" that guides you everywhere on the road. If they say they figured all of the above would happen, kindly refer those people to me. We're always in need of good futurists.
The truth is that very few thought, in those olden days of yore, that technologies like supercomputers, wireless networks or artificial intelligence would make their way to the general public in the future. Even those who figured that these technologies would become cheaper and more widespread failed to imagine the uses they would be put to, and how they would change society. And here we are today, when you're posting your naked pictures on Facebook. Thanks again, technology.
History is full of cases in which a new and groundbreaking technology, or a collection of such technologies, completely changes people’s lives. The change is often so dramatic that people who’ve lived before the technological leap have a very hard time understanding how the subsequent generations think. To the people before the change, the new generation may as well be aliens in their way of thinking and seeing the world.
These kinds of dramatic shifts in thinking are called a Singularity – a term originally derived from mathematics, describing a point whose exact properties we are incapable of deciphering. It's that place where the equations basically go nuts and make no sense any longer.
The singularity has risen to fame in the last two decades largely because of two thinkers. The first is the scientist and science fiction writer Vernor Vinge, who wrote in 1993 that –
“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”
The other prominent prophet of the Singularity is Ray Kurzweil. In his book The Singularity is Near, Kurzweil basically agrees with Vinge, but believes Vinge was too optimistic in his view of technological progress. Kurzweil believes that by the year 2045 we will experience the greatest technological singularity in the history of mankind: the kind that could, in just a few years, overturn the institutions and pillars of society and completely change the way we view ourselves as human beings. Just like Vinge, Kurzweil believes that we'll get to the Singularity by creating a super-human artificial intelligence (AI). An AI of that level could conceive of ideas that no human being has thought of before, and will invent technological tools more sophisticated and advanced than anything we have today.
Since one of the roles of this AI would be to improve itself and perform better, it seems pretty obvious that once we have a super-intelligent AI, it will be able to create a better version of itself. And guess what the new generation of AI will do then? That's right – improve itself even further. This kind of race would lead to an intelligence explosion, and would leave poor old us – simple biological machines that we are – far behind.
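If you want to see why this race 'explodes', here's a toy back-of-the-envelope model in Python. The numbers are entirely made up, but the compounding logic is the point.

```python
# A toy model of recursive self-improvement, with made-up numbers:
# each AI generation designs a successor slightly smarter than itself,
# and the compounding quickly leaves a fixed human baseline behind.
HUMAN_LEVEL = 1.0   # a fixed human baseline
ai = 1.0            # assume the first AI merely matches human intelligence
GAIN = 1.1          # assume each generation improves on its parent by 10%

for generation in range(1, 51):
    ai *= GAIN      # the smarter AI designs an even smarter successor
    if generation % 10 == 0:
        print(f"generation {generation:2d}: {ai / HUMAN_LEVEL:7.1f}x human level")

# With these assumed numbers, generation 50 is already ~117x the human
# baseline: an 'intelligence explosion' is compound interest on smarts.
```

Humans, meanwhile, stay stuck at 1.0x on this chart – which is the whole worry.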
If this notion scares you, you're in good company. A few of the most widely regarded scientists, thinkers and inventors, like Stephen Hawking and Elon Musk, have already expressed their concern that super-intelligent AI could escape our control and move against us. Others focus on the great opportunities that such a singularity holds for us. They believe that a super-intelligent AI, if kept on a tight leash, could analyze and expose many of the wonders of the world for us. Einstein, after all, was a remarkable genius who revolutionized our understanding of physics. Well, how would the world change if we enjoyed tens, hundreds, even millions of 'Einsteins', each able to analyze every problem and find a solution for it?
Similarly, what would things look like if each of us could enjoy his very own "Doctor House" that constantly analyzed his medical state and provided ongoing recommendations? And what new ideas and revelations would those super-intelligences come up with when they went over humanity's history and holy books?
We can already see how AI is starting to change the way we think about ourselves. The computer "Deep Blue" managed to beat Garry Kasparov at chess in 1997. Today, after nearly twenty years of further development, human chess masters can no longer beat even an AI running on a laptop computer on their own. But after his defeat, Kasparov created a new kind of chess contest: one in which human and computerized players collaborate, and together reach greater successes and accomplishments than either would achieve alone. In this sort of collaboration, the computer provides rapid computations of possible moves and suggests several to the human player. Its human compatriot needs to pick the best option, to understand the opponents and to throw them off balance.
Together, the two create a centaur: a mythical creature that combines the best traits of two different species. We see, then, that AI has already forced chess players to reconsider their humanity and their game.
In the next few decades we can expect a similar singularity to occur in many other games, professions and fields that were previously reserved for human beings only. Some humans will struggle against the AI. Others will ignore it. Both approaches will prove disastrous, since once AI becomes more capable than human beings, both the strugglers and the deniers will be left behind. Others will realize that the only way to succeed lies in collaborating with the computers. They will help computers learn, and will direct their growth and learning. Those people will be the centaurs of the future. And this realization – that man can no longer rely only on himself and his brain, but must instead collaborate and unite with sophisticated computers to meet tomorrow's challenges – well, isn't that a singularity all by itself?
“So let me get this straight,” I said to one of the mothers in my son’s preschool. “You want to have a parent meeting, where we’ll demand that all the kids in the preschool will only receive vegan organic food cooked in the school perimeter?”
She nodded in affirmation.
“Well, this sounds like a meeting I just can’t miss.” I decided. “Give me a second to check my cellphone number. I just don’t remember it anymore.”
Her mouth twisted as I took out my smartphone and opened my contact book. “You really must rid yourself of this device.” She sniffed. “It’s ruining everyone’s memories.”
“Oh, certainly.” I smiled back at her. “First, just get a divorce from your husband. Then I’ll divorce my smartphone.”
“Excuse me?” Her eyes widened.
"It's pretty simple." I explained. "The smartphone is a piece of technology. It's a tool that serves us and aids our memory. You could easily say that marriage is a similar technology – a social tool that evolved to augment and enhance our cognitive functions. This is what psychologist Daniel Wegner and his colleagues discovered in the 80s, when they noticed that married couples tend to share the burden of memories between them. The husband, for example, remembers when they should take the cat to the vet, while the wife remembers her mother-in-law's date of birth. You remember your mother-in-law's birthday, don't you?"
"No, and I have no intention to." she said icily. "Now, I would ask you to – "
"Maybe you should communicate better with your husband." I tried to offer advice. "Wegner found that memory sharing between couples happens naturally when they live and communicate with each other. Instead of opening an encyclopedia to find the answer to a certain question, the husband can just ask his wife. Wegner called this phenomenon transactive memory, since husband and wife share memories because they are so accessible to each other. Together, they are smarter than either of them alone. And who knows? This may be one reason for the durability of the institution of marriage in human culture – it has served us throughout history and enabled couples to make better and more efficient choices. For example, you and your husband probably discussed with each other the best way to take out a mortgage on your house, didn't you?"
“We didn’t need any mortgage.” She let me know in no uncertain terms. “And I must say that I’m shocked by your – “
“ – by my knowledge?” I completed the sentence for her. “I am too. All this information, and much more, appears in Clive Thompson’s book, Smarter than You Think: How Technology Is Changing Our Minds for the Better, which I’m currently reading. Highly recommended, by the way. Do you want me to loan it to you when I finish?”
“I would not.” She shot back. “What I want is for you to – “
" – to give you more advice. I would love to!" I smiled. "Well, for starters, if you want an even better memory, then you should probably marry a few more partners. Research has shown that transactive memory works extremely well in large groups. For example, when people learned complicated tasks like putting together a radio, and were later tested to see what they had learned, the results were clear: if you learned in a group and were tested as part of a group, you had better success than those who learned alone. Students can also use transactive memory: they divide memory tasks between the members of the learning group, and as a result they can analyze the subject in a deeper and more meaningful manner. So maybe you should find a few more husbands. Or wives. Whatever you like. We don't judge others, here in America."
“Or maybe – “ And here I paused for a second, as her face rapidly changed colors. “Maybe I can keep my smartphone with me. Which would you prefer?”
She opened her mouth, thought better of it, turned around and got out of the door.
“You forgot to take my number!” I called after her. When she failed to reply, I crouched down to my kid.
“I’ve got a lot to tell her about organic food, too.” I told him. “Please ask her son for their phone number, and tell it to me tomorrow, OK?”
He promised to do so, and I stroked his hair affectionately. Transactive memory really is a wonderful thing to have.
"Hey, wake up! You've got to see something amazing!" I gently wake up my four-year-old son.
He opens his eyes and mouth in a yawn. “Is it Transformers?” He asks hopefully.
“Even better!” I promise him. “Come outside to the porch with me and you’ll see for yourself!”
He dashes outside with me. Out in the street, Providence's garbage truck is taking care of the trash bins in a completely robotic fashion. Here's the evidence I shot, so you can see for yourself –
The kid glares at me. “That’s not a Transformer.” He says.
"It's a vehicle with a robotic arm that grabs the trash bins, lifts them up in the air and empties them into the truck." I argue. "And then it even returns the bins to their proper place. And you really should take note of this, kiddo, because every detail in this scene provides hints about the way you'll work in the future, and what the job market will look like."
“What’s a job?” He asks.
I choose to ignore that. "Here are the most important points. First, routine tasks become automated. Routine tasks are those that need to be repeated without too much variation in between, and can therefore be easily handled by machines. In fact, that's what the industrial revolution was all about – machines doing human menial labor more efficiently than human workers, on a massive scale. But in the last few decades, machines have shown themselves capable of taking on more and more routine tasks. And very soon we'll see tasks that were considered non-routine in the past, like driving a car, being relegated to robots. So if you want to have a job in the future, try to find something that isn't routine – a job that requires mental agility and finding solutions to new challenges every day."
He’s pointedly rubbing his eyes, but I’m on a roll now.
“Second, we’ll still need workers, but not as many. Science fiction authors love writing about a future in which nobody will ever need to work, and robots will serve us all. Maybe this future will come to pass, but on the way there we’ll still need human workers to bridge the gap between ancient and novel systems. In the garbage truck, for example, the robotic arm replaces two or three workers, but we still need the driver to pilot the vehicle – which is ancient technology – and to deal with unexpected scenarios. Even when the vehicle becomes completely autonomous and no longer needs a driver, a few workers will still be needed on alert: they’ll be called to places where the truck has malfunctioned, or where the AI has identified a situation it’s incapable of or unauthorized to deal with. So there will still be human workers, just not as many as we have today.”
He opens his mouth for another yawn, but I cut him short. “Never show them you’re tired! Which brings me to the third point: in the future, we’ll need fewer workers – but of a higher caliber. Each worker will carry a large burden on his or her shoulders. Take this driver, for example: he needs to stop at the exact spot in front of every bin, operate the robotic arm and make sure nothing gets messy. In the past, drivers didn’t need all that responsibility, because the garbage workers who rode in the back of the truck did most of the work. The modern driver also had to learn to operate the new vehicle with its robotic arm, so it’s clear that he can learn and adapt to new technologies. These are skills that you’ll need to acquire for yourself. And when will you learn them?!”
“In the future,” he recites by rote in a toneless voice. “Can I go back to sleep now?”
“Never,” I promise him. “You have to get upgraded – or be left behind. Take a look at those two bins on the pavement. The robotic arm can only pick up one of them – the one that comes in the right size. The other bin is left unattended, and has to wait until a primitive human comes to take care of it. In other words, only the upgraded bin receives rapid, efficient treatment from the garbage truck. So unless you want to be left way behind like that other trash bin, you have to prepare for the future and move along with it – or everyone else will leap ahead of you.”
He nods with drooping eyelids, and yawns again. I allow him to complete this yawn, at least.
“OK, daddy,” he says. “Now can I go back to bed?”
I stare at him for a few more moments, while my mind returns from the future to the present.
“Yes,” I smile sadly at him. “Go back to bed. The future will wait patiently for you to grow up.”
My gaze follows him as he goes back to his room, and the smile melts from my lips. He’s still just four years old, and will learn all the skills he needs to handle the future world as he grows up.
For him, the future will wait patiently.
For others – like those unneeded garbage workers – it’s already here.
A few months ago I wrote in this blog about the way augmented reality games will transform the face of the gaming industry: they’ll turn the entire physical world into a gaming arena, so that players have to actually walk around streets and cities to take part in games. I also forecast that players in such games would be divided into factions, in order to create and legitimize rivalries and interesting conflicts. Now Pokemon Go has been released, and both forecasts have immediately been proven true.
By combining the elements of augmented reality and creating factions, Pokemon Go has become an incredibly successful phenomenon. It is now the biggest mobile game in U.S. history, with more users than Twitter, and more daily usage time than social media apps like WhatsApp, Instagram or Snapchat. One picture is worth a thousand words, and I especially like this one of a man capturing a wild Pidgey pokemon while his wife is busy giving birth.
But is the game here to stay? And what will its impact be on society?
It’s no wonder Pokemon Go has reached such heights of virality. Because of the game’s interactions with the physical world, people can be seen playing it everywhere, and in effect they become walking commercials for the game. Pokemon Go also builds on a long history – almost twenty years – of pokemon hunting, which ensures that anyone who’s ever hunted pokemon just had to download the app.
Will the game maintain its hype for long? That’s difficult to answer. Dan Porter, one of the creators of Draw Something – a game that garnered 50 million downloads in just 50 days – wrote a great piece on the subject. He believes, in short, that the game is a temporary fad. It may take a year for most people to fall off the bandwagon, so that only a few million hardcore gamers will remain. That’s still an impressive number, but it’s far from the current hype. As he says –
“For the casual Pokemon Go player, the joy of early play I believe will eventually be replaced by gyms that are too competitive and Pokemon that are too hard to find.”
I agree with his analysis, but it depends on one important assumption: that the game does not evolve and continually readapt itself to different groups of users. Other social games, like World of Warcraft, have successfully made this transition and maintained a large user base for more than a decade. Niantic may be able to do that, or it may not. In the long haul it doesn’t matter: other, more successful AR games will take over.
Pokemon Go is bringing in a lot of revenue right now, with estimates ranging from $1 million to $2.3 million a day. Some analysts believe that the game could pull in a billion dollars a year once it is launched worldwide. That’s a lot of money, and every half-decent gaming company is going to join the race for AR very soon. It could be Blizzard recreating Starcraft’s fame in AR fashion, with teams running around buildings, collecting virtual resources and ambushing each other. Or maybe Magic the Gathering or Hearthstone, with players collecting thousands of different cards at Hearthstops around the world, much like Pokemon Go, and using them to build decks and fight each other. Heck, I’d play those, and I bet so would the tens of millions of gamers whose childhood was shaped by these games. The floodgates, in short, have been opened. AR games are here to stay.
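A quick back-of-the-envelope check on those revenue figures: even the upper estimate of $2.3 million a day adds up to only about $840 million a year (2.3 × 365 ≈ 840), so the billion-dollar forecast assumes the worldwide launch will grow the player base well beyond its current size.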
And so we must understand the consequences of such games for society.
A Whole New World
Pokemon Go is already starting to change the way people interact with each other. I took the following picture from my house’s window a few days ago: several people walking together, eyes on their phones, without talking to each other – and yet all collaborating and coordinating with one another. They were connected via the layer of augmented reality. In effect, they were in a world of their own, one only tenuously connected to the physical world.
People coordinate via Pokemon Go.
In Australia, a hastily advertised Pokemon Go meeting brought together 2,000 players in a single park, where they all hunted pokemon together. And coffee shops turned gyms around the world have suddenly found themselves buzzing with customers who came for the win – and stayed for the latte. And of course, the White House has been turned into a gym, with all three Pokemon Go teams competing over it.
The game has made people go to places they would not ordinarily visit, in their search for pokemon. As a result, at least two dead bodies have been discovered so far by players. If you watch players walking the streets, you’ll also notice their peculiar pattern of movement: instead of following the road, they periodically stop, check their smartphones and change course – sometimes making a U-turn. They’re not following the infrastructure of the physical world, but rather obeying a virtual infrastructure and its entities: pokestops and pokemon.
And that’s just a sign of what’s coming, and of how power – the power to influence people and their choices – is starting to shift from governments to private hands.
The Power Shift
What is power? While many philosophers believed that governments had power over their citizens because of their ability to mobilize policemen, the French philosophers Louis Althusser and Michel Foucault realized that the mechanisms of power and control are inherent in society itself. Whenever two people in a society exchange words, they also implicitly make clear to each other how each should behave.
Infrastructure has the same effect on people, and it has been used since time immemorial as a mechanism for directing the populace. For a very long time, governments have controlled the infrastructure of urban places. Governments paved roads, installed traffic lights and put up street signs. This control over infrastructure arose partly because some projects, like road paving, are so expensive, but also because things like traffic lights and signs have an immense influence over people’s behavior. They tell us where we’re allowed to go and when, and essentially make the government’s decisions manifest and understandable to everyone. There’s a very good reason I couldn’t erect a new traffic sign even if I wanted to.
But now, with Pokemon Go, the gaming industry is erecting virtual signposts of its own: it’s creating an alternative virtual reality with new rules and different kinds of infrastructure, and merging that virtual reality with our physical one, so that people can choose which to obey.
Is it any wonder that authorities everywhere are less than happy with the game? Fatwas have been issued against it, religious leaders want to ban it, Russian politicians speak out against it, and police and fire departments have to explain to citizens that they can’t just walk into jails and fire stations in their search for pokemon.
In the long run, Pokemon Go – and AR in general – symbolizes a new kind of freedom: freedom from the physical infrastructure that could only be created and controlled by centralized governments. And at the very same time, the power to create virtual infrastructure and direct people’s movement is shifting to the industry.
What does that mean?
In the short term, we’re bound to see this power being put to good use. In the coming decade we’ll see Pokemon Go and other AR games being used to direct people to where they can do the most good. When a kid gets lost in the wilderness, Niantic will populate the area with rare pokemon, so that hundreds or thousands of people will come searching for them – and for the child, too. Certain dangerous areas will bear virtual warning signs, or even deduct points from players who enter them. Special ‘diet’ pokemon will be found in the healthy food sections of stores.
In the long run, the real risk is that this power will shift over to the industry, which, unlike an elected government, has no built-in mechanisms for keeping that power in check. It could be used to send people to junk food chains like McDonald’s, which, as it turns out, is already partnering with Pokemon Go. But more than that, AR games could be used to encourage people to take part in rallies and political demonstrations, or even simply to control their movement in the streets.
This power shift does not necessarily have to be a bad thing, but we need to be aware of it and constantly ask what hidden agendas these AR games hold, so that the public can exercise some measure of control over the industry as well. Does Pokemon Go encourage us to visit McDonald’s, even though that ultimately damages our health? Well, a public outcry may put a stop to that kind of collaboration.
We’ve already realized that firms that control the virtual medium, like Facebook, gain the power to influence people’s thinking and knowledge. We’ve also learned that Facebook has been using that power to influence politics – albeit in a bumbling, good-natured way, and seemingly without really meaning to. Now that the physical and virtual worlds are becoming adjoined, we need to understand that the companies who control the virtual layer gain power that must be scrutinized and monitored carefully.
Conclusions
Pokemon Go is not going to change the world on its own, but it’s one of the first indicators of how things are about to change when physical reality is augmented by virtual layers. The critical question we must ask is who controls those added layers of reality, and how we can put constraints on the power they gain over us. Because we may end up controlling all the pokemon – but who will gain control over us?