Inequality in the US

Here’s a fascinating quote from Martin Ford’s Rise of the Robots:

“Surveys have shown that most Americans vastly underestimate the existing extent of inequality, and when asked to select an “ideal” national distribution of income, they make a choice that, in the real world, exists only in Scandinavian social democracies.”

The amazing thing is that most people simply don’t realize just how bad things are. Human beings have a tendency to compare their life quality with that of their neighbors and relatives, not with the millionaires and billionaires.

Surveys show that Americans generally believe the wealthiest 20 percent of Americans possess just 59 percent of the nation’s wealth, and that the bottom 40 percent possess 9 percent [source]. This is nowhere near the truth: in reality, the top 20 percent possess 84 percent of the wealth, and the bottom 40 percent possess only 0.3 percent [source].

Here’s How Bad Things Actually Are:

  • Between 1983 and 2009, Americans became wealthier as a whole. But the bottom 80 percent of income earners saw a net decrease in their wealth, while the top 1 percent of income earners captured more than 40 percent of the nation’s wealth increase [source].
  • Overall, the earnings of the top 1 percent rose by 278 percent between 1979 and 2007. Over the same period, the earnings of median earners (that’s probably you and me) increased by only 35 percent [source – The Second Machine Age].
  • Inequality in the US (as measured by the CIA according to the Gini index) is far more extreme than in places like Egypt, Croatia, Vietnam or Greece [source].
  • Between 2009 and 2012, 95 percent of total income gains went to the wealthiest 1 percent [source].
  • Economic mobility in the US – i.e. whether people can rise (or sink) from one economic class to another – is significantly lower than in many European countries. If you were born to a family in the bottom 20% of income, you have a 42 percent chance of staying at that income level as an adult. Compare that to Denmark (a 25 percent chance) or even Britain (a 30 percent chance) [source]. That means the American dream of achieving success through hard work is much more attainable if you’re living in a Nordic country, or even in the freaking monarchy of the United Kingdom.
  • Inequality also has implications for your life expectancy. Geographic inequality in life expectancy increased between 1980 and 2014. Life expectancy in some US counties is a full 20 years lower than in the counties at the top. Yes, you read that right. The average person in eastern Kentucky or southwestern West Virginia can expect to live about twenty years less than a person in, say, central Colorado. And the disparity between US counties shows no sign of shrinking anytime soon [source].

What It All Means

Reading these statistics, you may say that inequality is just a symptom of the times and of technological progress, and there’s definitely some evidence for that.

You may highlight the fact that ‘a rising tide lifts all boats’, and indeed – that’s true as well. Some may rise more rapidly than others, but in general, over the last hundred years, the average American’s quality of life has risen.

You may even say that some billionaires, like Bill Gates and Mark Zuckerberg, are giving back their wealth to society. The data shows that the incredibly wealthy donate around 10% of their net worth over their lifetime. And again, that’s correct (and incredibly admirable).

The only problem is, none of these explanations matters in the end. Because inequality still exists, and it has some unfortunate side effects: people may not realize exactly how bad it is, but they still feel it’s pretty bad. They see that the rich keep getting richer. They understand that the rich and wealthy have a large influence on the US Congress and Senate [source].

In short, they understand that the system is skewed, and not in their favor.

And so, they demand change. Any kind of change – just something that will turn the system upside down and make the wealthy elites rethink everything they know. Populist politicians (and occasionally ones who really do want to make a difference) then use these yearnings to get elected.

Indeed, when you look at the candidate quality that mattered most to voters in the 2016 US elections, you can see that the ability to bring about change was by far more important than other traits like “good judgement”, “experience” or even “cares about me”. And there you have it: from rampant inequality to the Trump regime.

Now, things may not be as bleak as they seem. Maybe Trump will work towards minimizing inequality. But even if he won’t (or can’t), I would like to think that the political system in the US has learned its lesson, and that the Democratic Party has realized that in the next election cycle it needs to put inequality on its agenda, and find ways to fight it.

Do you think I’m hoping for too much?

———————-
Cover image from the Economist

Who’ll Win the Next War: the Tank or the Geek?

I was asked on Quora how the tanks of the future are going to be designed. Here’s my answer – I hope it’ll make you reflect once again on the future of war and what it entails.

First, consider this: the Israeli Merkava Mark IV tank.

Merkava Mark IV. Source: Michael Mass, Yad La-Shiryon, found on Wikipedia

It is one of the most technologically advanced tanks in the world. It is armed with a massive 120 mm smoothbore gun that fires shells with immense explosive power, with two roof-mounted machine guns, and with a 60 mm mortar in case the soldiers inside really want to make a point. However, the tank has to be deployed on the field, and needs to reach its target. It also costs around $6 million.

Now consider this: the Israeli geek (picture taken from the Israeli reality show – Beauty and the Geek). The geek is the one on the left, in case you weren’t sure.

The common Israeli Geek. He’s the one on the left of the picture. Source: Israeli reality show – Beauty and the Geek.

With the click of a button and the aid of some hacking software available on the Darknet, our humble Israeli geek can paralyze whole institutions, governments and critical infrastructures. He can derail trains (happened in Poland), deactivate sewage pumps and mix contaminated water with drinking water (happened in Texas), or even cut the power supply to tens of thousands of people (happened in Ukraine). And if that isn’t bad enough, he could take control of the enemy’s female citizens’ wireless vibrators and operate them to his and/or their satisfaction (potentially happened already).

Oh, and the Israeli geek works for free. Why? Because he loves hacking stuff. Just make sure you cover the licensing costs for the software he’s using, or he might hack your vibrator next.

So, you asked – “how will futuristic tanks be designed”?

I answer, “who cares”?

 

But Seriously Now…

When you’re thinking of the future, you have to realize that some paradigms are going to change. One of those paradigms is that of physical warfare. You see, tanks were created to do battle in a physical age, in which they had an important role: to protect troops and provide overwhelming firepower while bringing those troops wherever they needed to be. That was essentially the German blitzkrieg strategy.

In the digital age, however, everything is connected to the internet, or very soon will be. Not just every computer, but every bridge, every building, every power plant and energy grid, and every car. And as security futurist Marc Goodman noted in his book Future Crimes, “when everything is connected, everything is vulnerable”. Any piece of infrastructure that you connect to the internet, immediately becomes vulnerable to hacking.

Now, here’s a question for you: what is the purpose of war?

I’ll give you a hint: it’s not about driving tanks with roaring engines around. It’s not about soldiers running and shooting in the field. It’s not even about dropping bombs from airplanes. All of the above are just tools for achieving the real purpose: winning the war by either making the enemy surrender to you, or neutralizing it completely.

And how do you neutralize the enemy? It’s quite simple: you demolish the enemy’s factories; you destroy their cities; you ruin your enemy’s citizens’ morale to the point where they can’t fight you anymore.

In the physical age, armies clashed on the field because each army was on the way to the other side’s cities and territory. That’s why you needed fast tanks with awesome armament and armor. But today, in the digital age, hackers can leap straight over the battlefield and wage war directly on cities in real time. They can shut down hospitals and power plants, kill everyone with a heart pacemaker or an insulin pump, and make trains and cars collide with each other. In short, they could shut down entire cities.

So again – who needs tanks?

 

And Still…

I’m not saying there aren’t going to be tanks. The physical aspect of warfare still counts, and one can’t just disregard it. However, tanks simply don’t count as much in comparison to the cyber-security aspects of warfare (partly because tanks themselves are connected nowadays).

Again, that does not mean that tanks are useless. We still need to figure out the exact relationship between tanks and geeks, and precisely where, when and how each should be deployed in the new digital age. But if you were to ask me in ten years what’s more important – the tank or the geek – then my bet would definitely be on the geek.

 


If this aspect of future warfare interests you, I invite you to read the two papers I’ve published in the European Journal of Futures Research and in Foresight, about future scenarios for crime and terror that rely on the internet of things.

Should You Consider Fate when Planning Ahead?

I was recently asked on Quora whether there is some kind of a grand scheme to things: a destiny that we all share, a guiding hand that acts according to some kind of moral rules.

This is a great question, and one that we’re all worried about. While there’s no way to know for sure, the evidence points against this kind of fate-biased thinking – as a forecasting experiment funded by the US Department of Defense recently showed.

In 2011, the US Department of Defense began funding an unusual project: the Good Judgement Project. In this project, led by Philip E. Tetlock, Barbara Mellers and Don Moore, people were asked to volunteer their time and rate the chance of occurrence of certain events. Overall, thousands of people took part in the exercise, answering hundreds of questions over a period of two years. Their answers were scored as soon as the events in question actually occurred (or failed to occur).

After two years, the directors of the project identified a sub-type of people they called Superforecasters. These top forecasters were doing so well that their predictions were 30% more accurate than those of intelligence officials who had access to highly classified information!

(And yes, for the statistics-lovers among us: the researchers absolutely did run statistical tests, which showed that the chances of those people being accidentally so accurate were minuscule. The superforecasters kept doing well, over and over again.)
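For readers curious how forecast accuracy is scored at all, here’s a minimal sketch of the Brier score, the standard metric in forecasting tournaments of this kind – a squared-error penalty on the probabilities a forecaster assigns. The numbers below are made up for illustration; this is not the project’s actual code or data.

```python
# A minimal illustration of the Brier score: the mean squared difference between
# the probability a forecaster assigned and what actually happened (1 or 0).
# Illustrative only - not the Good Judgement Project's actual code or data.

def brier_score(forecasts, outcomes):
    """forecasts: assigned probabilities (0..1); outcomes: 1 if the event occurred, else 0."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A hypothetical forecaster who leans towards the right answers...
confident = [0.8, 0.2, 0.9, 0.1]
# ...versus one who hedges every question at 50-50.
hedger = [0.5, 0.5, 0.5, 0.5]
what_happened = [1, 0, 1, 0]

print(brier_score(confident, what_happened))  # 0.025 - lower is better
print(brier_score(hedger, what_happened))     # 0.25
```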

Once the researchers identified this subset of people, they began analyzing their personalities and methods of thinking. You can read about it in some of the papers about the research (attached at the end of this answer), as well as in the great book – Superforecasting: the Art and Science of Prediction. For this answer, the important thing to note is that those superforecasters were also tested for what I call “the fate bias”.


The Fate Bias

There’s no denying that most people believe in fate of some sort: a guiding hand that makes everything happen for a reason, in accordance with some grand scheme or moral rules. This tendency seems to manifest itself most strongly in children, and in God-believers (84.8 percent of whom believe in fate), but even 54.3 percent of atheists believe in fate.

It’s obvious why we want to believe in fate. It gives our woes, and the sufferings of others, a special meaning. It justifies our pains, and makes us think that “it’s all for a reason”. Our belief in fate helps us deal with bereavement and with physical and mental pain.

But it also makes us lousy forecasters.

 

Fate is Incompatible with Accurate Forecasting

In the Good Judgement Project, the researchers ran tests on the participants to check for their belief in fate. They found out that the superforecasters utterly rejected fate. Even more significantly, the better an individual was at forecasting, the more inclined he was to reject fate. And the more he rejected fate, the more accurate he was at forecasting the future.

 

Fate is Incompatible with the Evidence

And so, it seems that fate is simply incompatible with the evidence. People who try to predict the occurrence of events in a ‘fateful’ way, as if they were obeying a certain guiding hand, are prone to failure. On the other hand, those who believe there is no ‘higher order to things’ and plan accordingly usually turn out to be right.

Does that mean there is no such thing as fate, or a grand scheme? Of course not. We can never disprove the existence of such a ‘grand plan’. What we can say with some certainty, however, is that human beings who claim to know what that plan actually is, seem to be constantly wrong – whereas those who don’t bother explaining things via fate, find out that reality agrees with them time and time again.

So there may be a grand plan. We may be in a movie, or God may be looking down on us from up above. But if that’s the case, it’s a god we don’t understand, and the plan – if there actually is one – is completely undecipherable to us. As Neil Gaiman and the late Terry Pratchett beautifully wrote –

God does not play dice with the universe; He plays an ineffable game of His own devising… an obscure and complex version of poker in a pitch-dark room, with blank cards, for infinite stakes, with a Dealer who won’t tell you the rules, and who smiles all the time.

And if that’s the case, I’d rather just say out loud – “I don’t believe in fate” – and plan and invest accordingly.

You’ll simply have better success that way. And when the universe is cheating at poker with blank cards, Heaven knows you need all the help you can get.

 


 

For further reading, here are links to some interesting papers about the Good Judgement Project and the insights derived from it –

Bringing probability judgments into policy debates via forecasting tournaments

Superforecasting: How to Upgrade Your Company’s Judgment

Identifying and Cultivating Superforecasters as a Method of Improving Probabilistic Predictions

Psychological Strategies for Winning a Geopolitical Forecasting Tournament

Rethinking the training of intelligence analysts

 

The Little Military Drone that Could

We hear all around us about the major breakthroughs that await just around the bend: of miraculous cures for cancer, of amazing feats of genetic engineering, of robots that will soon take over the job market. And yet, underneath all the hubbub, there lurk the little stories – the occasional bizarre occurrences that indicate the kind of world we’re going into. One of those recent tales happened at the beginning of this year, and it can provide a few hints about the future. I call it – The Tale of the Little Drone that Could.

Our story begins towards the end of January 2017, when said little drone was launched in southern Arizona as part of a simple exercise. The drone was part of the Shadow RQ-7Bv2 series, but we’ll just call it Shady from now on. Drones like Shady are usually used for surveillance by the US Army, and should not stray more than 77 miles (120 km) away from their ground-based control station. But Shady had other plans in the mind it didn’t have: as soon as it was launched, all communications were lost between the drone and the control station.

Shady the drone. Source: Department of Defense

Other, more primitive drones would probably have crashed at around this stage, but Shady was a special drone indeed. You see, Shadow drones enjoy a high level of autonomy. In simpler words, they can stay in the air and keep performing their mission even if they lose their connection with the operator. The only issue was that Shady didn’t know what its mission was. And as the confused operators on the ground realized at that moment – nobody really had any idea what it was about to do.

Autonomous aerial vehicles are usually programmed to perform certain tasks when they lose communication with their operators. Emergency systems are immediately activated as soon as the drone realizes that it’s all alone, up there in the sky. Some of them circle above a certain point until radio connection is reestablished. Others attempt to land straight away on the ground, or try to return to the point from which they were launched. This, at least, is what the emergency systems should be doing. Except that in Shady’s case, a malfunction happened, and they didn’t.

Or maybe they did.

Some believe that Shady’s memory accidentally contained the coordinates of its former home at a military base in Washington state, and that it valiantly attempted to come back home. Or maybe it didn’t. These are, obviously, just speculations. It’s entirely possible that the emergency systems simply failed to jump into action, and Shady just kept sailing up in the sky, flying towards the unknown.
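To make the idea of pre-programmed emergency behaviors a little more concrete, here’s a toy sketch of lost-link failsafe logic. Everything in it – the names, the thresholds, the home-coordinates failure mode – is my own assumption for illustration, not the Shadow’s actual flight software.

```python
# A toy sketch of lost-link failsafe logic for an autonomous drone.
# Purely illustrative: real autopilots are far more elaborate, and every name
# and threshold here is an assumption, not the Shadow's actual code.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class DroneState:
    link_lost_seconds: float                     # time since last contact with the ground station
    home_coords: Optional[Tuple[float, float]]   # recorded launch point, if known
    fuel_fraction: float                         # remaining fuel, 0.0 to 1.0

def lost_link_action(state: DroneState) -> str:
    """Decide what to do when communication with the operator is lost."""
    if state.link_lost_seconds < 30:
        return "loiter"            # circle in place and wait for the link to return
    if state.fuel_fraction < 0.1:
        return "land_immediately"  # better a controlled landing than a crash
    if state.home_coords is not None:
        return "return_to_launch"  # head back to the recorded launch point
    return "land_immediately"      # no home known: get on the ground safely

# The speculated failure mode: if the recorded "home" is stale or wrong,
# "return_to_launch" can send the drone hundreds of miles in the wrong direction.
print(lost_link_action(DroneState(120.0, (47.6, -122.3), 0.8)))  # -> return_to_launch
```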

Be that as it may, our brave (at least in the sense that it felt no fear) little drone left its frustrated operators behind and headed north. It rode the strong winds of that day, and sailed over forests and Native American reservations. Throughout its flight, the authorities kept track of the drone by radar, but after five hours it reached the Rocky Mountains. It should not have been able to pass them, and since the military lost its radar signature at that point, everyone just assumed Shady had crashed.

But it didn’t.

Instead, Shady rose higher, to a height of 12,000 feet (4,000 meters), and glided up and over the Rocky Mountains, in environmental conditions it was not designed for and at distances it was never meant to operate at. Nonetheless, it kept buzzing north, undeterred, on a 632-mile journey, until it crashed near Denver. We don’t know the reason for the crash yet, but it’s likely that Shady simply ran out of fuel at about that point.

The Rocky Mountains. Shady crossed them too.

And that is the tale of Shady, the little drone that never thought it could – mainly since it doesn’t have any thinking capabilities at all – but went the distance anyway.

 

What Does It All Mean?

Shady is just one autonomous robot out of many. Autonomous robots, even limited ones, can perform certain tasks with minimal involvement by a human operator. Shady’s tale is simply the result of a bug in the robot’s operating software. There’s nothing strange in that by itself, since we discover bugs in practically every program we use: the Word program I’m using to write this post occasionally (and rarely, fortunately) gets stuck, or even starts deleting letters and words by itself, for example. These bugs are annoying, but we realize they’re practically inevitable in programs as complex as the ones we use today.

Well, Shady had a bug as well. The only difference between Word and Shady is that the latter is a military drone worth $1.5 million, and the bug caused it to cross three states and the Rocky Mountains with no human supervision. It can be safely said that we’re all lucky that Shady is normally used only for surveillance, and is thus unarmed. But Shady’s less innocent cousin, the Predator drone, is also used to attack military targets on the ground, and is thus equipped with two Hellfire anti-tank missiles and six Griffin air-to-surface missiles.

A Predator drone firing away. 

I rather suspect that we would be less amused by this episode if one of the armed Predators were to take Shady’s place and sail across America with nobody knowing where it’s going, or what it’s planning to do once it gets there.

 

Robots and Urges

I’m sure the emotionally laden story at the beginning of this post made some of you laugh, and for a very good reason. Robots have no will of their own. They have no thoughts or self-consciousness. The sophisticated autonomous robots of the present, though, exhibit “urges”. Programmers build certain urges into the robots, which are activated in pre-defined ways.

In many ways, autonomous robots resemble insects. Both are conditioned – by programming, or by the structure of their simple neural systems – to act in certain ways in certain situations. From that viewpoint, insects and autonomous robots both have urges. And while insects are quite complex organisms, they have bugs as well – which is the reason mosquitos keep flying into the light of electric traps at night. Their simple urges are incapable of dealing with the new demands of the modern environment. And if insects can experience bugs in unexpected environments, how much more so autonomous robots?

Shady’s tale shows what happens when a robot obeys the wrong kind of urges. Such bugs are inevitable in any complex system, but their impact could be disastrous when they occur in autonomous robots – especially of the armed variety that can be found in the battlefield.

 

Scared? Take Action!

If this revelation scares you as well, you may want to sign the open letter that the Future of Life Institute released around a year and a half ago, against the use of autonomous weapons in war. You won’t be alone out there: more than a thousand AI researchers have already signed that letter.

Will governments be deterred from employing autonomous robots in war? I highly doubt it. We failed to stop even the potentially world-shattering proliferation of nuclear weapons, so putting a halt to robotic proliferation doesn’t seem likely. But at least when the next Shady or Freddy the Predator gets lost, you’ll be able to shake your head in disappointment and mention that you just knew this would happen, that you warned everyone in advance, and that nobody listened to you.

And when that happens, you’ll finally know what being a futurist feels like.

 

 

 

Should We Actually Use Huge Japanese Robots in Warfare?

OK, so I know the headline of this post isn’t really the sort of question a stable and serious scientist, or even a futurist, should be asking. But I was asked this question on Quora, and thought it warranted some thought. So here’s my answer to this mystery that has hounded movie directors for the last century or so!

If Japan actually managed to create the huge robots / exoskeletons so favored in the anime genre, all the generals in all the opposing armies would stand up and clap wildly for them. Because these robots are practically the worst war machines ever. And believe it or not, I know that because we conducted actual research into this area, together with Dr. Aharon Hauptman and Dr. Liran Antebi.

But before I tell you about that research, let me say a few words about the woes of huge humanoid robots.

First, there are already some highly sophisticated exoskeleton suits developed by major military contractors, like Raytheon’s XOS2 and Lockheed Martin’s HULC. While they’re definitely the coolest thing since sliced bread and frosted donuts, they have one huge disadvantage: they need plenty of energy to work. As long as you can connect them to a power line, it shouldn’t be too much of an issue. But once you ask them to go out into the battlefield… well, after one hour at most they’ll stop working, and quite likely trap the human operating them.

Some companies, like Boston Dynamics, have tried to overcome the energy challenge by adding a diesel engine to their robots. Which is great, except for the fact that it’s still pretty cumbersome, and extremely noisy. Not much use for robots that are supposed to accompany marines on stealth missions.

Robots: Left – Raytheon’s XOS2 exoskeleton suit; Upper right – Lockheed Martin’s HULC; Bottom right – Boston Dynamics’ Alpha Dog.

 

But who wants stealthy robots, anyway? We’re talking about gargantuan robots, right?!

Well, here’s the thing: the larger and heavier the robot, the more energy you need to operate it. That means you can’t really add much armor to it. And the larger you make it, the more unwieldy it becomes. There’s a reason elephants are so sturdy, with thick legs – that’s the only way they can support their enormous body weight. Huge robots, which are much heavier than elephants, can’t even have legs with joints. When the Mk. II Mech was unveiled at Maker Faire 2015, it reached a height of 15 feet, weighed around 6 tons… and could only move by crawling on a caterpillar track. So, in short, it was a tank.

And don’t even think about it rising to the air. Seriously. Just don’t.

Megabots’ Mk. II Mech, complete with the quintessential sexy pilot.

But let’s say you manage to somehow bypass all of those pesky energy constraints. Even in that case, huge humanoid robots would not be a good idea because of two main reasons: shape, and size.

Let’s start with shape. The human body evolved the way it did – limbs, groin, hair and all – to cope with the hardships of life on the one hand, while also being able to have sex, give birth and generally do fun stuff. But robots aren’t supposed to be doing fun stuff. Unless, that is, you want to build a huge Japanese humanoid sex robot. And yes, I know that sounds perfectly logical for some horribly unfathomable reason, but that’s not what the question is about.

So – if you want a battle-robot, you just don’t need things like legs, a groin, or even a head with a vulnerable computer-brain. You don’t need a huge multifunctional battle-robot. Instead, you want small and efficient robots that are uniquely suited to the task set for them. If you want to drop bombs, use a bomber drone. If you want to kill someone, use a simple robot with a gun. Heck, it can look like a child’s toy, or like a ball, but what does it matter? It just needs to get the job done!

You don’t need a gargantuan Japanese robot for battle. You can even use robots as small as General Robotics’ Dogo: basically a small tank the size of your foot, which carries a Glock pistol and can use it efficiently.

Last but not least, large humanoid robots are not only inefficient, cumbersome and impractical, but are also extremely vulnerable to being hit. One solid hit to the head will take them out. Or to a leg. Or the torso. Or the groin of that gargantuan Japanese sex-bot that’s still wondering why it was sent to a battlefield where real tanks are doing all the work. That’s why armies around the world are trying to figure out how to use swarms of drones instead of deploying one large robot: if one drone takes the hit, the rest of the swarm still survives.

So now that I’ve thrown cold water on the idea of large Japanese humanoid robots, here’s the final rub. A few years ago I was part of a research project, along with Dr. Aharon Hauptman and Dr. Liran Antebi, that was meant to assess the capabilities robots will possess in the next twenty years. I’ll cut straight to the chase: the experts we interviewed and surveyed believed that in twenty years or less we’ll have –

  • Robots with perfect camouflage capabilities in visible light (essentially invisibility);
  • Robots that can heal themselves, or use objects from the environment as replacement parts;
  • Biological robots.

One of the only categories about which the experts were skeptical was that of “transforming platforms” – i.e. robots that can change shape to adapt themselves to different tasks. There is just no need for these highly versatile (and expensive, inefficient and vulnerable) robots, when you can send ten highly specialized robots to perform each task in turn. Large humanoid robots are the same. There’s just no need for them in warfare.

So, to sum things up: if Japan were to construct anime-style Gundam-like robots and send them to war, I really hope they prepare them for having sex, because they would be screwed over pretty horribly.

The three AI waves that will shape the future

I’ve done a lot of writing and research recently about the bright future of AI: that it’ll be able to analyze human emotions, understand social nuances, conduct medical treatments and diagnoses that overshadow the best human physicians, and in general make many human workers redundant and unnecessary.

I still stand behind all of these forecasts, but they are meant for the long term – twenty or thirty years into the future. And so, the question that many people want answered is about the situation at the present. Right here, right now. Luckily, DARPA has decided to provide an answer to that question.

DARPA is one of the most interesting US agencies. It’s dedicated to funding ‘crazy’ projects – ideas that are completely outside the accepted norms and paradigms. It’s no surprise, then, that DARPA contributed to the establishment of the early internet and the Global Positioning System (GPS), as well as a flurry of other bizarre concepts, such as legged robots, prediction markets, and even self-assembling work tools. Ever since DARPA was founded, it has focused on moonshots and breakthrough initiatives, so it should come as no surprise that it’s also focusing on AI at the moment.

Recently, DARPA’s Information Innovation Office released a new YouTube clip explaining the state of the art of AI, outlining its capabilities in the present – and considering what it could do in the future. The online magazine Motherboard described the clip as “targeting [the] AI hype”, and as “necessary viewing”. It’s 16 minutes long, but I’ve condensed its core messages – and my thoughts about them – in this post.

The Three Waves of AI

DARPA distinguishes between three different waves of AI, each with its own capabilities and limitations. Out of the three, the third one is obviously the most exciting, but to understand it properly we’ll need to go through the other two first.

First AI Wave: Handcrafted Knowledge

In the first wave of AI, experts devised algorithms and software according to the knowledge they themselves possessed, and tried to provide these programs with logical rules that had been deciphered and consolidated throughout human history. This approach led to the creation of chess-playing computers and of delivery optimization software. Most of the software we’re using today is based on AI of this kind: our Windows operating system, our smartphone apps, and even the traffic lights that allow people to cross the street when they press a button.

Modria is a good example of the way this kind of AI works. Modria was hired in recent years by the Dutch government to develop an automated tool that helps couples get divorced with minimal involvement from lawyers. Modria, which specializes in the creation of smart justice systems, took the job and devised an automated system that relies on the knowledge of lawyers and divorce experts.

On Modria’s platform, couples that want to divorce are asked a series of questions. These could include questions about each parent’s preferences regarding child custody, property distribution and other common issues. After the couple answers the questions, the system automatically identifies the topics on which they agree or disagree, and tries to direct the discussions and negotiations towards the optimal outcome for both.

First wave AI systems are usually based on clear and logical rules. The systems examine the most important parameters in every situation they need to solve, and reach a conclusion about the most appropriate action to take in each case. The parameters for each type of situation are identified in advance by human experts. As a result, first wave systems find it difficult to tackle new kinds of situations. They also have a hard time abstracting – taking knowledge and insights derived from certain situations, and applying them to new problems.

To sum it up, first wave AI systems are capable of implementing simple logical rules for well-defined problems, but are incapable of learning, and have a hard time dealing with uncertainty.
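To make the “handcrafted knowledge” idea concrete, here’s a tiny sketch of a first-wave-style, rule-based system in the spirit of the divorce-mediation example. It’s entirely my own illustration – the rules and topics are invented, and Modria’s actual product is of course far more sophisticated.

```python
# A toy first wave ("handcrafted knowledge") system: every rule was written in
# advance by a human expert. Entirely illustrative - not Modria's actual logic.

def classify_issue(topic, answer_a, answer_b):
    """Decide whether a divorce topic is already agreed on or needs negotiation."""
    if answer_a == answer_b:
        return "agreed"
    if topic == "child_custody":
        return "refer_to_mediator"   # expert rule: custody disputes go to a human
    return "negotiate_online"

answers_a = {"child_custody": "joint", "house": "sell", "car": "keep_a"}
answers_b = {"child_custody": "joint", "house": "keep_b", "car": "keep_a"}

for topic in answers_a:
    print(topic, "->", classify_issue(topic, answers_a[topic], answers_b[topic]))
# child_custody -> agreed
# house -> negotiate_online
# car -> agreed
```

Every branch in that sketch is a rule a human wrote down in advance – which is exactly why such systems struggle the moment a situation doesn’t fit the rules.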

Now, some of you readers may at this point shrug and say that this is not artificial intelligence as most people think of it. The thing is, our definitions of AI have evolved over the years. If I had asked a person on the street, thirty years ago, whether Google Maps is AI software, he wouldn’t have hesitated in his reply: of course it is AI! Google Maps can plan an optimal course to get you to your destination, and even explain in clear speech where you should turn at each and every junction. And yet, many today see Google Maps’ capabilities as elementary, and require AI to do much more: AI should also take control of the car on the road, develop a philosophy of its own that takes the passenger’s desires into consideration, and make coffee at the same time.

Well, it turns out that even ‘primitive’ software like Modria’s justice system and Google Maps are fine examples of AI. And indeed, first wave AI systems are being utilized everywhere today.

Second AI Wave: Statistical Learning

In 2004, DARPA opened its first Grand Challenge. Fifteen autonomous vehicles competed to complete a 150-mile course in the Mojave desert. The vehicles relied on first wave AI – i.e. rule-based AI – and immediately proved just how limited this kind of AI actually is. Every picture taken by the vehicle’s camera, after all, is a new sort of situation the AI has to deal with!

To say that the vehicles had a hard time handling the course would be an understatement. They could not distinguish between different dark shapes in images, and couldn’t figure out whether a shape was a rock, a far-away object, or just a cloud obscuring the sun. As the Grand Challenge’s deputy program manager said, some vehicles “were scared of their own shadow, hallucinating obstacles when they weren’t there.”

The sad result of the first DARPA Grand Challenge

None of the groups managed to complete the entire course, and even the most successful vehicle only got as far as 7.4 miles into the race. It was a complete and utter failure – exactly the kind of research that DARPA loves funding, in the hope that the insights and lessons derived from these early experiments would lead to the creation of more sophisticated systems in the future.

And that is exactly how things went.

One year later, when DARPA opened the 2005 Grand Challenge, five groups successfully made it to the end of the track. Those groups relied on the second wave of AI: statistical learning. The head of one of the winning groups was immediately snatched up by Google, by the way, and put in charge of developing Google’s autonomous car.

In second wave AI systems, the engineers and programmers don’t bother teaching the systems precise and exact rules to follow. Instead, they develop statistical models for certain types of problems, and then ‘train’ these models on many varied samples to make them more precise and efficient.

Statistical learning systems are highly successful at understanding the world around them: they can distinguish between two different people or between different vowels. They can learn and adapt themselves to different situations if they’re properly trained. However, unlike first wave systems, they’re limited in their logical capacity: they don’t rely on precise rules, but instead they go for the solutions that “work well enough, usually”.

The poster boy of second wave systems is the concept of artificial neural networks. In artificial neural networks, the data goes through computational layers, each of which processes the data in a different way and transmits it to the next level. By training each of these layers, as well as the complete network, they can be shaped into producing the most accurate results. Oftentimes, the training requires the networks to analyze tens of thousands of data sources to reach even a tiny improvement. But generally speaking, this method provides better results than those achieved by first wave systems in certain fields.
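To give a feel for what ‘training’ means here, below is a minimal sketch of a tiny neural network that learns the XOR function purely from examples, with no handcrafted rules. It’s an illustration of the principle only – real second wave systems use far larger networks and vastly more data.

```python
# A minimal "second wave" example: a tiny neural network that learns XOR from
# examples alone, by repeatedly nudging its weights. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)        # XOR truth table

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)         # layer 1: 2 inputs -> 8 hidden units
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)         # layer 2: 8 hidden -> 1 output
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(5000):                               # "training" = thousands of tiny corrections
    h = sigmoid(X @ W1 + b1)                           # forward pass, first layer
    out = sigmoid(h @ W2 + b2)                         # forward pass, second layer
    grad_out = (out - y) * out * (1 - out)             # backpropagate the error
    grad_h = grad_out @ W2.T * h * (1 - h)
    W2 -= 0.5 * (h.T @ grad_out); b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ grad_h);   b1 -= 0.5 * grad_h.sum(axis=0)

print(out.round(2).ravel())   # should end up close to [0, 1, 1, 0]
```

Note that nothing in the trained weights reads like a rule a human could inspect – which is exactly the problem described below.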

So far, second wave systems have managed to outdo humans at face recognition, at speech transcription, and at identifying animals and objects in pictures. They’re making great leaps forward in translation, and if that’s not enough – they’re starting to control autonomous cars and aerial drones. The success of these systems at such complex tasks leaves AI experts aghast, and for a very good reason: we’re not yet quite sure why they actually work.

The Achilles heel of second wave systems is that nobody is certain why they work so well. We see artificial neural networks succeed at the tasks they’re given, but we don’t understand how they do so. Furthermore, it’s not clear that there actually is a methodology – some kind of reliance on ground rules – behind artificial neural networks. In some respects they are indeed much like our brains: we can throw a ball into the air and predict where it’s going to fall, without calculating Newton’s equations of motion, or even being aware of their existence.

This may not sound like much of a problem at first glance. After all, artificial neural networks seem to be working “well enough”. But Microsoft may not agree with that assessment. The firm released a bot on social media last year, in an attempt to emulate human writing and make light conversation with youths. The bot, christened “Tay”, was supposed to replicate the speech patterns of a 19-year-old American girl, and talk with teenagers in their unique slang. Microsoft figured the youths would love that – and indeed they did. Many of them began pranking Tay: they told her of Hitler and his great success, revealed to her that the 9/11 terror attack was an inside job, and explained in no uncertain terms that immigrants are the bane of the great American nation. And so, a few hours later, Tay began applying her newfound knowledge, claiming live on Twitter that Hitler was a fine guy altogether, and really did nothing wrong.


That was the point when Microsoft’s engineers took Tay down. Her last tweet was that she was taking a time-out to mull things over. As far as we know, she’s still mulling.

This episode exposed the causality challenge which AI engineers are currently facing. We could predict fairly well how first wave systems would function under certain conditions. But with second wave systems we can no longer easily identify the causality of the system – the exact way in which input is translated into output, and data is used to reach a decision.

All this is not to say that artificial neural networks and other second wave AI systems are useless. Far from it. But it’s clear that if we don’t want our AI systems to get all excited about the Nazi dictator, some improvements are in order. We must move on to the next, third wave of AI systems.

Third AI Wave: Contextual Adaptation

In the third wave, the AI systems themselves will construct models that will explain how the world works. In other words, they’ll discover by themselves the logical rules which shape their decision-making process.

Here’s an example. Let’s say that a second wave AI system analyzes the picture below, and decides that it is a cow. How does it explain its conclusion? Quite simply – it doesn’t.

There’s an 87% chance that this is a picture of a cow. Source: Wikipedia

Second wave AI systems can’t really explain their decisions – just as a kid could not write down Newton’s equations of motion just by looking at the movement of a ball through the air. At most, second wave systems can tell us that there is an “87% chance that this is a picture of a cow”.

Third wave AI systems should be able to add some substance to that final conclusion. When a third wave system analyzes the same picture, it will probably say that since there is a four-legged object in it, there’s a higher chance of it being an animal. And since its surface is white splotched with black, it’s even more likely that this is a cow (or a Dalmatian dog). Since the animal also has udders and hooves, it’s almost certainly a cow. That, presumably, is what a third wave AI system would say.
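Here’s a toy sketch of what such a self-explaining decision might look like. It’s an illustration of the flavor only – third wave systems don’t yet exist, and the features and numbers below are invented.

```python
# A toy illustration of a decision that explains itself by accumulating simple
# pieces of evidence. Not a real third wave system - just the flavor of one.

def explain_cow_guess(features):
    evidence, confidence = [], 0.1                    # start from a weak prior
    if features.get("legs") == 4:
        confidence += 0.3; evidence.append("it has four legs, so it is likely an animal")
    if features.get("coat") == "white with black splotches":
        confidence += 0.3; evidence.append("its coat is white splotched with black")
    if features.get("udders") and features.get("hooves"):
        confidence += 0.25; evidence.append("it has udders and hooves")
    return min(confidence, 0.99), evidence

conf, reasons = explain_cow_guess(
    {"legs": 4, "coat": "white with black splotches", "udders": True, "hooves": True})
print(f"~{conf:.0%} confident this is a cow, because: " + "; ".join(reasons))
```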

Third wave systems will be able to rely on several different statistical models to reach a more complete understanding of the world. They’ll be able to train themselves – just as AlphaGo did when it played a million Go games against itself – to identify the commonsense rules they should use. Third wave systems will also be able to take information from several different sources to reach a nuanced and well-explained conclusion. These systems could, for example, extract data from several of our wearable devices, from our smart home, from our car and from the city in which we live, and determine our state of health. They’ll even be able to program themselves, and potentially develop abstract thinking.

The only problem is that, as the director of DARPA’s Information Innovation Office says himself, “there’s a whole lot of work to be done to be able to build these systems.”

And this, as far as the DARPA clip is concerned, is the state of the art of AI systems in the past, present and future.

What It All Means

DARPA’s clip does indeed explain the differences between different AI systems, but it does little to assuage the fears of those who urge us to exercise caution in developing AI engines. DARPA does make clear that we’re not even close to developing a ‘Terminator’ AI, but that was never the issue in the first place. Nobody is trying to claim that AI today is sophisticated enough to do all the things it’s supposed to do in a few decades: have a motivation of its own, make moral decisions, and even develop the next generation of AI.

But the fulfillment of the third wave is certainly a major step in that direction.

When third wave AI systems are able to decipher, all on their own, new models that improve their function, they’ll essentially be able to program new generations of software. When they understand context and the consequences of their actions, they’ll be able to replace most human workers, and possibly all of them. And when they’re allowed to reshape the models through which they appraise the world, they’ll actually be able to reprogram their own motivation.

All of the above won’t happen in the next few years, and certainly won’t be achieved in full in the next twenty years. As I explained, no serious AI researcher claims otherwise. The core message of the researchers and visionaries who are concerned about the future of AI – people like Stephen Hawking, Nick Bostrom, Elon Musk and others – is that we need to start asking right now how to control these third wave AI systems, of the kind that will become ubiquitous twenty years from now. When we consider the capabilities of these AI systems, this message does not seem far-fetched.

The Last Wave

The most interesting question for me, one DARPA does not seem to delve into, is what the fourth wave of AI systems will look like. Will it rely on an accurate emulation of the human brain? Or maybe fourth wave systems will exhibit decision-making mechanisms we are incapable of understanding as yet – mechanisms that will be developed by the third wave systems themselves?

These questions are left open for us to ponder, to examine and to research.

That’s our task as human beings, at least until third wave systems go on to do that too.

What Will Google Look Like in 2030?

I was asked on Quora what Google will look like in 2030. Since that is one of the most important issues the world is facing right now, I took some time to answer it in full. 

Larry Page, one of Google’s two co-founders, once said off-handedly that Google is not about building a search engine. As he said it, “Oh, we’re really making an AI”. Google right now is all about building the world brain that will take care of every person, all the time and everywhere.

By 2030, Google will have that World Brain in existence, and it will look after all of us. And that’s quite possibly both the best and worst thing that could happen to humanity.

To explain that claim, let me tell you a story of how your day is going to unfold in 2030.

2030 – A Google World

You wake up in the morning, January 1st, 2030. It’s freezing outside, but you’re warm in your room. Why? Because Nest – your AI-based thermostat – knows exactly when you need to wake up, and warms the room you’re in so that you enjoy the perfect temperature for waking up.

And who acquired Nest three years ago for $3.2 billion USD? Google did.

Google acquired Nest for $3.2 billion USD. Source: Fang Digital Marketing

You go out to the street and order an autonomous taxi to take you to your workplace. Who programmed that autonomous car? Google did. Who acquired Waze – the crowdsourced navigation app? That’s right: Google did.

After lunch, you take a stroll around the block, with your Google Glass 2.0 on your face. Your smart glasses know it’s a cold day, they know you like hot cocoa, and they also know that there’s a cocoa store just around the bend which your friends have recommended before. So they offer to take you there – and if you agree, Google earns a few cents out of anything you buy in the store. And who invented Google Glass…? I’m sure you get the picture.

I can go on and on, but the basic idea is that the entire world is going to become connected in the next twenty years. Many items will have sensors in and on them, and will connect to the cloud. And Google is not only going to produce many of these sensors and appliances (such as the Google Assistant, autonomous cars, Nest, etc.) but will also assign a digital assistant to every person, that will understand the user better than that person understands himself.


It’s a Google World. Source: ThemeReflex

The Upside

I probably don’t have to explain why the Google World Brain will make our lives much more pleasant. The perfect coordination and optimization of our day-to-day dealings will ensure that we need to invest fewer resources (energy, time, concentration) to achieve a high quality of life. I see that primarily as a good thing.

So what’s the problem?

The Downside

Here’s the thing: the digital world suffers from what’s called “the One Winner Effect”. Basically, it means there’s only room for one great winner in every sector. So there’s only one Facebook – the second largest social media network in English is Twitter, with only ~319 million users. That’s nothing compared to Facebook’s 1.86 billion users. Similarly, Google controls ~65% of the online search market. That’s a huge number when you realize that competitors like Yahoo and Bing – large and established services – split most of the remaining ~35% between them. So again, one big winner.

So what’s the problem, you ask? Well, a one-winner market tends to create soft monopolies, in which one company can provide the best services, and so it’s just too much of a hassle to leave for other services. Google is creating such a soft monopoly. Imagine how difficult it will be for you to wake up tomorrow morning and migrate your e-mail address to one of the competitors, transfer all of your Google Docs there, sell your Android-based (Google’s OS!) smartphone and replace it with an iPhone, wake up cold in the morning because you’ve switched Nest for some other appliance that hasn’t had the time to learn your habits yet, etc.

Can you imagine yourself doing that? I’m sure some ardent souls will, but most of humanity doesn’t care deeply enough, or doesn’t even have the option to stop using Google. How do you stop using Google, when every autonomous car on the street has a Google camera? How do you stop using Google, when your website depends on Google not banning it? How do you stop using Google, when practically every non-iPhone smartphone relies on the Android operating system? This is a Google World.

And Google knows it, too.

Google Flexes its Muscles

Recently, around 200 people got banned from using Google services because they cheated Google by reselling the Pixel smartphone. Those people woke up one morning and found out they couldn’t log into their Gmail, that they couldn’t access their Google Docs, and if they were living in the future – they would probably have found out they couldn’t use Google’s autonomous cars and other apps on the street. They were essentially sentenced to a digital death.

Now, public uproar caused Google to back down and revive those people’s accounts, but this episode shows you the power Google is starting to amass. What’s more, Google doesn’t have to ban people in such a direct fashion. Imagine, for example, that your website is demoted by Google’s search engine (whose inner workings nobody knows) simply because you’re speaking out against Google. Google is allowed by law to do that. So who’s going to stand up and talk smack about Google? Not me, that’s for sure. I love Google.

To sum things up, Google is not required by law to serve everyone, or even to be ‘fair’ in its recommendations about services. And as it gathers more power and becomes more prevalent in our daily lives, we will need to find mechanisms to ensure that Google, or Google-equivalent services, are provided to everyone, to prevent people from being left outside the system, and to enable people to keep speaking up against Google and other monopolies.

So in conclusion, it’s going to be a Google world, and I love Google. Now please share this answer, since I’m not sure Google will!

Note: all this is not to say that Google is ‘evil’ or similar nonsense. It is not even unique – if Google takes the fall tomorrow, Amazon, Apple, Facebook or even Snapchat will take its place. This is simply the nature of the world at the moment: digital technologies give rise to big winners. 

Things I’ve Learned as ISIS’ Chief Technology Officer; Or – Why ISIS Loves Trump

A few months ago I received a tempting offer: to become ISIS’ chief technology officer.

How could I refuse?

Before you pick up the phone and call the police, you should know that it was ‘just’ a wargame, initiated and operated by the strategic consulting firm Wikistrat. Many experts on ISIS and the Middle East in general took part in the wargame, playing the roles of some of the sides waging war right now on Syrian soil – from Syrian president Bashar al-Assad, to the Western-backed rebels, and even ISIS.

Wargames of this kind are pretty common in security organizations, as a way to understand how the enemy thinks. As Harper Lee wrote, “You never really understand a man… until you climb into his skin and walk around in it.”

And so, to understand ISIS, I climbed into its skin, and started thinking aloud and discussing with my ISIS teammates what we could do to really overwhelm our enemies.

But who are those enemies?

In one word, everyone.

This is no exaggeration. Abu Bakr al-Baghdadi, the leader of ISIS and its self-proclaimed caliph, warned Muslims in 2015 that the organization’s war is “the Muslims’ war altogether. It is the war of every Muslim in every place, and the Islamic State is merely the spearhead in this war.”

Other spiritual authorities who help explain ISIS’ policies to foreigners and potential converts agree with Baghdadi. The influential Muslim preacher Abu Baraa has similarly stated that “the world is divided into two camps. Make sure you are on the side of the Muslims. You shouldn’t be on the side of the infidels, nor should you be on the fence, neutral…”

This approach is, of course, quite comfortable for ISIS, since the organization needs to draw as many Muslims as possible to its camp. And so, thinking as ISIS, we realized that we must find a way to turn this seemingly-small conflict of ours into a full-blown religious war: Muslims against everyone else.

Unfortunately, it seems most Muslims around the world do not agree with those ideas.

How could we convince them into accepting the truth of the global religious war?

It was obvious that we needed to create a fracture between the Muslim and Christian world, but world leaders weren’t playing to our tune. The previous American president, Barack Obama, fiercely refused to blame Islam for terror attacks, emphasizing that “We are not at war with Islam.”

French president Francois Hollande was even worse for our cause: after an entire summer of terror attacks in France, he still refused to blame Islam. Instead, he instituted a new Foundation for Islam in France, to improve relations with the nation’s Muslim community.

The situation was clearly dire. We needed reinforcements: fighters from Western countries. We needed Muslims to join us, or at the very least rebel against their Western governments, but very few were joining us from Europe. Reports put the number of European Muslims who joined ISIS at barely 4,000, out of 19 million Muslims living in Europe. That means just 0.02% of the European Muslim population actually cared enough about ISIS to join us!

Things were even worse in the USA, where, according to the Pew Research Center, Muslims were generally content with their lives. They were just as likely as other Americans to have earned college degrees and attended graduate school, and to report household incomes of $100,000 or more. Nearly two thirds of Muslims stated that they “do not see a conflict between being a devout Muslim and living in a modern society”. Not much chance to incite a holy war there.

So we agreed on trying the usual things: planning terror attacks, making as much noise as we possibly could, keeping up the fight in the Middle East, and recruiting Muslims on social media. But we realized that things really needed to change if radical Islam were to have any chance at all. We needed a new kind of world leader: one who would play along with our idea of a global conflict; one who would close borders to Muslims, and make Muslim immigrants feel unwanted in their countries; one who would turn a deaf ear to the plea of refugees, simply because they came from Muslim countries.

After a single week in ISIS, it was clear that the organization desperately needs a world leader who thinks and acts like that.

Do you happen to know someone who might fit that bill?


What Can You Do Today to be Remembered for the Next 100,000 Years?

I’ve recently begun writing on Quora (and yes, that’s just one of the reasons I haven’t been posting here as much as I should). One of the recent questions I’ve been asked to answer was about the far, far away future. Specifically –

“What can you do today to be remembered 10,000 or 100,000 years from now?”

So if you’re wondering along the same lines, here’s my answer.


This is a tough one, but I think I’ve got the solution you’re looking for. Before I hand it over to you, let’s see why the most intuitive idea – that of leaving a time capsule buried somewhere in the ground – is also probably the wrong way to solve this puzzle.

A time capsule is a box you can bury in the ground that will keep your writings in pristine condition right up to the moment it is opened by your son’s son’s son’s son’s son’s (repeat a few thousand times) son. Let’s call him… Multison.

So, what will you leave in the time capsule for dear Multison? Your personal diary? Newspaper clippings about you? If that’s the case, then you should know that even the best-preserved books and scrolls will decay to dust within a few thousand years, unless you keep them in vacuum conditions and never touch them.

So maybe leave him a recording? That’s great, but be sure to use the right kind of recording equipment, like Millenniata’s M-Disc DVDs, which are supposed to last for ~10,000 years (no refunds).

But here’s an even more difficult problem: language evolves. We can barely understand the English of Shakespearean plays, which were written less than 500 years ago. Even if you were to write yourself into a book and leave it in a well-preserved time capsule for 10,000 years, it is likely that nobody will be able to read it when it is opened. The same applies to any kind of recording.

So what can you do? Etch your portrait on a cave wall, like the cavemen did? That’s great, except that you’ll need to do it in thousands of caves, just for the chance that some of the drawings will survive. And what can Multison learn about you from an etched portrait with no words? Basically, all that we know about the cavemen from their drawings is which animals they used to hunt. That’s not a very efficient way to transmit information through the ages.

Another possibility (and one that I’ve considered doing myself) is to genetically engineer a bacterium that contains information about you in its genetic code. Scientists have already shown they can write information into the DNA of a bacterium, turning it into a living hard drive (a toy sketch of what that encoding step might look like appears a couple of paragraphs below). Some microorganisms should have room enough for thousands of bytes of data, and each time they replicate, each of the descendants will carry the message forward into the future. You have the evolving-language issue here again, but at least you’ll get the text of the message across to Multison. He should really appreciate all the effort you’ve put into this, by the way.

But he probably won’t even know about it, because bacteria are not great copywriters. Every time your bacterium divides in two, some of its DNA will mutate. When critical genes mutate, the bacterium dies. But your text is not essential to the germ’s continued existence, and so it is most likely that within a few thousand years (probably closer to a decade), the bacteria will simply shed the extra DNA load.
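If you’re wondering what “writing yourself into a genome” even means in practice, here is a toy sketch of the encoding step in Python. It’s purely my own illustration, assuming a naive two-bits-per-base mapping; real DNA-storage experiments use far more elaborate, error-correcting schemes.

```python
# Toy illustration: encode a short message as DNA bases using a naive
# 2-bits-per-base mapping (00->A, 01->C, 10->G, 11->T). Real DNA data
# storage uses much more robust, error-correcting encodings.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def text_to_dna(text: str) -> str:
    bits = "".join(f"{byte:08b}" for byte in text.encode("utf-8"))
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_text(dna: str) -> str:
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

message = "Hello, Multison"
sequence = text_to_dna(message)
print(sequence)                         # prints CAGACGCCCGTA... and so on
assert dna_to_text(sequence) == message  # the message survives the round trip
```

The decoding half is there mostly to show that the message survives the round trip – assuming, of course, that the bacterium cooperates, which is exactly the problem described above.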

Have you despaired already? Well, don’t, because here is a chart that could inspire hope again. It’s from Stewart Brand’s highly recommended book “The Clock of the Long Now”, and it shows the time frames over which changes occur.

Brand believes that each ‘layer’ changes and evolves at a different pace. Fashion changes by the week, while changes in commerce and infrastructure take years to accomplish, and (unfortunately) so do changes in governance. Culture and nature, on the other hand, take thousands of years to change. We still know of the idea of Zeus, the Greek god, even though there are almost no Zeus-worshippers today. And we still remember the myths of the Bible, even though their origins are thousands of years old.

So my suggestion for you? Start a new cultural trend, and make sure to imbue it with all the properties that will keep it viable through the ages. You can create a religion, for example. It’s easier than it sounds. The Mormon religion was created only two hundred years ago, with amazingly delusional claims, which didn’t seem to bother anyone anyway. And now there are a little more than 15 million Mormons in the world. If they keep up this pace, they’ll be a major religion within a few hundred years, and their founder and prophet, Joseph Smith, will live on for a very long time in their collective memory.

So a religion is probably the best solution, since it’s a self-conserving mechanism for propagating knowledge down the ages. You can even include commandments to fight other religions (and so increase your religion’s resistance to being overtaken by other ideas), or command your worshippers to mention your name every day so that they never forget it. Or that they should respect their mothers and fathers, so that people will want to teach the religion to their children. Or that they shouldn’t kill anyone (except for blasphemers, of course) so that the number of worshippers doesn’t dwindle. Or that…

Actually, now that I think of it, you may be too late.

Good luck outfighting Jehovah, Jesus and Muhammad.


Source for featured image: Neon Poisoning blog

The Singularity: What It Means for Us

I was recently asked to write a short article for kids that would explain what “the Singularity” is. So – here’s my shot at it. Let me know what you think!

 

Here’s an experiment that fits all ages: approach your mother and father (if they’re asleep, use caution). Ask them gently about that time before you were born, and whether they dared think back then that one day everybody would post and share their images on a social network called “Facebook”. Or that they would receive answers to every question from a mysterious entity called “Google”. Or enjoy the services of a digital adviser called “Waze” that guides them everywhere on the road. If they say they figured all of the above would happen, kindly refer those people to me. We’re always in need of good futurists.

The truth is that very few people thought, in those olden days of yore, that technologies like supercomputers, wireless networks or artificial intelligence would make their way to the general public in the future. Even those who figured that these technologies would become cheaper and more widespread failed to imagine the uses they would be put to, and how they would change society. And here we are today, when you’re posting your naked pictures on Facebook. Thanks again, technology.

History is full of cases in which a new and groundbreaking technology, or a collection of such technologies, completely changes people’s lives. The change is often so dramatic that people who lived before the technological leap have a very hard time understanding how the subsequent generations think. To the people before the change, the new generation may as well be aliens in their way of thinking and seeing the world.

These kinds of dramatic shifts in thinking are called a Singularity – a term originally derived from mathematics, where it describes a point whose exact properties we are incapable of deciphering. It’s the place where the equations basically go nuts and no longer make any sense.
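To make the mathematical origin of the term a little more concrete, here is a minimal example of my own (not one used by Vinge or Kurzweil): the function below simply has no sensible value at zero, and blows up without bound as you approach it.

```latex
% A minimal mathematical singularity: f(x) = 1/x is undefined at x = 0,
% and grows without bound as x approaches 0 from the right.
f(x) = \frac{1}{x}, \qquad \lim_{x \to 0^{+}} f(x) = +\infty
```

The technological Singularity borrows exactly this intuition: a point beyond which our usual models stop giving meaningful answers.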

The singularity has risen to fame in the last two decades largely because of two thinkers. The first is the scientist and science fiction writer Vernor Vinge, who wrote in 1993 that –

“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”

The other prominent prophet of the Singularity is Ray Kurzweil. In his book The Singularity is Near, Kurzweil basically agrees with Vinge but believes the latter was too optimistic in his view of technological progress. Kurzweil believes that by the year 2045 we will experience the greatest technological singularity in the history of mankind: the kind that could, in just a few years, overturn the institutions and pillars of society and completely change the way we view ourselves as human beings. Just like Vinge, Kurzweil believes that we’ll get to the Singularity by creating a super-human artificial intelligence (AI). An AI of that level could conceive of ideas that no human being has thought of before, and will invent technological tools that will be more sophisticated and advanced than anything we have today.

Since one of the roles of this AI would be to improve itself and perform better, it seems pretty obvious that once we have a super-intelligent AI, it will be able to create a better version of itself. And guess what the new generation of AI would then do? That’s right – improve itself even further. This kind of a race would lead to an intelligence explosion and would leave poor old us – simple biological machines that we are – far behind.
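To get a feel for why this is called an “explosion” rather than mere growth, here is a deliberately naive toy model (my own illustration, not anyone’s forecast). The only assumption is that each generation of AI improves by an amount proportional to how capable it already is, so improvements compound on improvements.

```python
# A deliberately naive toy model of recursive self-improvement.
# "capability" is an abstract number; each generation designs a successor
# whose improvement is proportional to the designer's own capability.
capability = 1.0         # generation 0: assumed to be roughly human-level
improvement_rate = 0.1   # assumed: 10% improvement per unit of capability

for generation in range(1, 16):
    capability *= 1 + improvement_rate * capability
    print(f"generation {generation:2d}: {capability:12,.1f}x human-level")

# The first generations crawl along, then the curve takes off:
# improvements compounding on improvements is the core of the
# "intelligence explosion" argument (and of the doubts about it, too --
# the whole model rides on that one assumed feedback loop).
```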

If this notion scares you, you’re in good company. A few of the most widely regarded scientists, thinkers and inventors, like Stephen Hawking and Elon Musk, have already expressed their concerns that super-intelligent AI could escape our control and move against us. Others focus on the great opportunities that such a singularity holds for us. They believe that a super-intelligent AI, if kept on a tight leash, could analyze and expose many of the wonders of the world for us. Einstein, after all, was a remarkable genius who revolutionized our understanding of physics. Well, how would the world change if we enjoyed tens, hundreds or even millions of ‘Einsteins’ that could analyze every problem and find a solution for it?

Similarly, what would things look like if each of us could enjoy our very own “Doctor House” that constantly analyzed our medical state and provided ongoing recommendations? And what new ideas and revelations would those super-intelligences come up with when they go over humanity’s history and holy books?

Already we see how AI is starting to change the ways in which we think about ourselves. The computer “Deep Blue” managed to beat Garry Kasparov in chess in 1997. Today, after nearly twenty years of further development, human chess masters can no longer beat even an AI running on a laptop computer. But after his defeat, Kasparov created a new kind of chess contest: one in which human and computerized players collaborate, and together reach greater successes and accomplishments than either would achieve on their own. In this sort of collaboration, the computer provides rapid computations of possible moves and suggests several to the human player. Its human compatriot needs to pick the best option, to understand their opponents and to throw them off balance.

Together, the two create a centaur: a mythical creature that combines the best traits of two different species. We see, then, that AI has already forced chess players to reconsider their humanity and their game.
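For readers who like to tinker, here is a small sketch of what the computer’s half of such a centaur might look like. It assumes the python-chess library and a local copy of the Stockfish engine (the path below is just a placeholder): the engine lists a few candidate moves with its evaluations, and the human half of the centaur picks among them.

```python
# Sketch of the "engine half" of centaur chess: the computer proposes a
# few candidate moves, and the human partner chooses among them.
# Assumes the python-chess package and a Stockfish binary on disk.
import chess
import chess.engine

ENGINE_PATH = "/usr/local/bin/stockfish"  # placeholder: point to your engine

board = chess.Board()  # starting position; in practice, the current game
engine = chess.engine.SimpleEngine.popen_uci(ENGINE_PATH)

# Ask for the top 3 candidate lines at a modest fixed depth.
candidates = engine.analyse(board, chess.engine.Limit(depth=15), multipv=3)
for rank, info in enumerate(candidates, start=1):
    move = info["pv"][0]            # first move of the suggested line
    score = info["score"].white()   # evaluation from White's point of view
    print(f"{rank}. {board.san(move)}  (engine eval: {score})")

engine.quit()
# The human half of the centaur now weighs these suggestions against
# intuition about the opponent, and makes the actual decision.
```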

In the next few decades we can expect a similar singularity to occur in many other games, professions and fields that were previously reserved for human beings only. Some humans will struggle against the AI. Others will ignore it. Both of these approaches will prove disastrous: once the AI becomes more capable than human beings, both the strugglers and the ignorers will be left behind. Others will realize that the only way to succeed lies in collaboration with the computers. They will help computers learn and will direct their growth and learning. Those people will be the centaurs of the future. And this realization – that man can no longer rely only on himself and his brain, but instead must collaborate and unite with sophisticated computers to beat tomorrow’s challenges – well, isn’t that a singularity all by itself?