The three AI waves that will shape the future

I’ve done a lot of writing and research recently about the bright future of AI: that it’ll be able to analyze human emotions, understand social nuances, conduct medical diagnoses and treatments that overshadow those of the best human physicians, and in general make many human workers redundant.

I still stand behind all of these forecasts, but they concern the long term – twenty or thirty years into the future. The question many people want answered, though, is about the situation at present. Right here, right now. Luckily, DARPA has decided to provide an answer.

DARPA is one of the most interesting US agencies. It’s dedicated to funding ‘crazy’ projects – ideas that fall completely outside the accepted norms and paradigms. DARPA contributed to the establishment of the early internet and the Global Positioning System (GPS), as well as to a flurry of other bizarre concepts, such as legged robots, prediction markets, and even self-assembling work tools. Ever since its founding, DARPA has focused on moonshots and breakthrough initiatives, so it should come as no surprise that it’s now turning its attention to AI as well.

Recently, DARPA’s Information Innovation Office released a YouTube clip explaining the state of the art of AI: what it can do in the present, and what it might do in the future. The online magazine Motherboard described the clip as “targeting [the] AI hype” and as “necessary viewing”. It’s sixteen minutes long, but I’ve condensed its core messages – and my thoughts about them – into this post.

The Three Waves of AI

DARPA distinguishes between three different waves of AI, each with its own capabilities and limitations. Out of the three, the third one is obviously the most exciting, but to understand it properly we’ll need to go through the other two first.

First AI Wave: Handcrafted Knowledge

In the first wave of AI, experts devised algorithms and software according to the knowledge they themselves possessed, trying to encode into these programs the logical rules that humanity had deciphered and consolidated over its history. This approach led to the creation of chess-playing computers and delivery-optimization software. Most of the software we use today is based on AI of this kind: the Windows operating system, our smartphone apps, and even the traffic lights that let people cross the street when they press a button.

Modria is a good example of how this kind of AI works. Modria, which specializes in creating smart justice systems, was hired in recent years by the Dutch government to develop an automated tool that helps couples get a divorce with minimal involvement from lawyers. It took the job and devised an automated system that relies on the knowledge of lawyers and divorce experts.

On Modria’s platform, couples who want to divorce are asked a series of questions: each parent’s preferences regarding child custody, property distribution, and other common issues. After the couple answers, the system automatically identifies the topics on which they agree or disagree, and tries to steer the discussions and negotiations toward the optimal outcome for both.

First wave AI systems are usually based on clear, logical rules. They examine the most important parameters of every situation they need to solve, and reach a conclusion about the most appropriate action to take in each case. The parameters for each type of situation are identified in advance by human experts. As a result, first wave systems find it difficult to tackle new kinds of situations. They also have a hard time abstracting – taking knowledge and insights derived from one situation and applying them to new problems.

To sum up: first wave AI systems can implement simple logical rules for well-defined problems, but they cannot learn, and they have a hard time dealing with uncertainty.
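To make this concrete, here’s a minimal sketch in Python of how a first wave system operates: a few hand-written rules, applied mechanically. The field names and logic below are hypothetical, loosely inspired by the Modria example above – a real system encodes far more expert knowledge – but the principle is the same: humans write the rules, and the program only applies them.

```python
# A hypothetical first wave sketch: expert-authored rules, mechanically applied.
# Field names and logic are invented for illustration.

def mediate(answers_a: dict, answers_b: dict) -> dict:
    """Compare two spouses' answers and route each issue by a fixed rule."""
    agreed, disputed = {}, []
    for issue in answers_a.keys() & answers_b.keys():
        if answers_a[issue] == answers_b[issue]:
            agreed[issue] = answers_a[issue]  # consensus: no negotiation needed
        else:
            disputed.append(issue)            # conflict: send to guided negotiation
    return {"agreed": agreed, "needs_negotiation": disputed}

print(mediate(
    {"custody": "shared", "house": "sell"},
    {"custody": "shared", "house": "keep"},
))
# -> {'agreed': {'custody': 'shared'}, 'needs_negotiation': ['house']}
```

Note what’s missing: the system cannot handle an issue no expert anticipated, and it learns nothing from one couple to the next.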

Now, some readers may shrug at this point and say that this is not artificial intelligence as most people think of it. The thing is, our definitions of AI have evolved over the years. Had I described Google Maps to a person on the street thirty years ago and asked whether it counts as AI software, the reply would have come without hesitation: of course it’s AI! Google Maps can plan an optimal route to your destination, and even explain in clear speech where to turn at each and every junction. And yet many people today see Google Maps’ capabilities as elementary, and demand much more of AI: it should also take control of the car on the road, develop a philosophy of its own that takes the passenger’s desires into consideration, and make coffee at the same time.

Well, it turns out that even ‘primitive’ software like Modria’s justice system and Google Maps are fine examples of AI. And indeed, first wave AI systems are in use everywhere today.

Second AI Wave: Statistical Learning

In 2004, DARPA held its first Grand Challenge: fifteen autonomous vehicles competed to complete a 150-mile course in the Mojave Desert. The vehicles relied on first wave AI – i.e., rule-based AI – and immediately proved just how limited that approach is. Every picture taken by a vehicle’s camera, after all, is a new kind of situation the AI has to deal with!

To say that the vehicles had a hard time handling the course would be an understatement. They could not distinguish between different dark shapes in images – was that a rock, a far-away object, or just a cloud obscuring the sun? As the Grand Challenge’s deputy program manager put it, some vehicles “were scared of their own shadow, hallucinating obstacles when they weren’t there.”

[Image: The sad result of the first DARPA Grand Challenge]

None of the groups managed to complete the course; even the most successful vehicle got only 7.4 miles into the race. It was a complete and utter failure – exactly the kind of research DARPA loves funding, in the hope that the insights and lessons from such early experiments will lead to more sophisticated systems in the future.

And that is exactly how things went.

One year later, in the 2005 Grand Challenge, five groups successfully made it to the end of the track. Those groups relied on the second wave of AI: statistical learning. The head of one of the winning groups, by the way, was promptly snatched up by Google and put in charge of developing its autonomous car.

In second wave AI systems, the engineers and programmers don’t teach the systems precise, exact rules to follow. Instead, they develop statistical models for certain types of problems, and then ‘train’ these models on many and varied samples to make them more accurate and efficient.

Statistical learning systems are highly successful at perceiving the world around them: they can distinguish between two different people, or between different vowels. They can learn and adapt to different situations if properly trained. However, unlike first wave systems, they’re limited in their logical capacity: they don’t rely on precise rules, but instead go for solutions that “work well enough, usually”.

The poster child of second wave systems is the artificial neural network. In artificial neural networks, data passes through computational layers, each of which processes the data in a different way and transmits it to the next layer. By training each of these layers, as well as the network as a whole, the network can be shaped into producing the most accurate results. Oftentimes the training requires the network to analyze tens of thousands of samples to achieve even a tiny improvement. But generally speaking, this method provides better results than first wave systems in certain fields.
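To make ‘training’ concrete, here is a from-scratch sketch of the idea: a tiny neural network, written in plain Python with numpy, that learns the XOR function purely from examples. Nobody programs the XOR rule into it; the weights are nudged, pass after pass, toward whatever values make the predictions match the training samples.

```python
# A minimal second wave sketch: a two-layer neural network learning XOR
# from examples alone, via gradient descent (backpropagation).
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR labels

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # input  -> hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # hidden -> output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20_000):
    # Forward pass: each layer transforms the data and hands it onward.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: push the error back through the layers and adjust weights.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # should be close to [0, 1, 1, 0]
```

The trained network gets the right answers, but notice that the ‘knowledge’ ends up smeared across the weight matrices – there is no single line of code you can point to and say “this is where it knows XOR”. That opacity becomes important in a moment.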

So far, second wave systems have managed to outdo humans at face recognition, speech transcription, and identifying animals and objects in pictures. They’re making great leaps forward in translation, and if that’s not enough – they’re starting to control autonomous cars and aerial drones. The success of these systems at such complex tasks leaves AI experts aghast, and for a very good reason: we’re not quite sure why they actually work.

The Achilles’ heel of second wave systems is that nobody is certain why they work so well. We see artificial neural networks succeed at the tasks they’re given, but we don’t understand how they do so. Furthermore, it’s not clear that there actually is a methodology – some kind of reliance on ground rules – behind artificial neural networks. In some respects they are much like our brains: we can throw a ball into the air and predict where it’s going to fall, without calculating Newton’s equations of motion, or even being aware of their existence.

This may not sound like much of a problem at first glance. After all, artificial neural networks seem to work “well enough”. But Microsoft may not agree with that assessment. Last year the firm released a bot on social media, in an attempt to emulate human writing and make light conversation with young people. The bot, christened “Tay”, was supposed to replicate the speech patterns of a 19-year-old American girl and talk with teenagers in their unique slang. Microsoft figured the youths would love that – and indeed they did. Many of them began pranking Tay: they told her of Hitler and his great success, revealed to her that the 9/11 terror attacks were an inside job, and explained in no uncertain terms that immigrants are the bane of the great American nation. And so, a few hours later, Tay began applying her newfound knowledge, claiming live on Twitter that Hitler was a fine guy altogether, and really did nothing wrong.

That was the point when Microsoft’s engineers took Tay down. Her last tweet was that she was taking a time-out to mull things over. As far as we know, she’s still mulling.

This episode exposed the causality challenge that AI engineers currently face. We could predict fairly well how first wave systems would function under given conditions. But with second wave systems we can no longer easily identify the causality of the system – the exact way in which input is translated into output, and data is used to reach a decision.

None of this is to say that artificial neural networks and other second wave AI systems are useless. Far from it. But it’s clear that if we don’t want our AI systems getting excited about the Nazi dictator, some improvements are in order. We must move on to the third wave of AI systems.

Third AI Wave: Contextual Adaptation

In the third wave, the AI systems themselves will construct models that will explain how the world works. In other words, they’ll discover by themselves the logical rules which shape their decision-making process.

Here’s an example. Let’s say that a second wave AI system analyzes the picture below, and decides that it is a cow. How does it explain its conclusion? Quite simply – it doesn’t.

[Image: There’s an 87% chance that this is a picture of a cow. Source: Wikipedia]

Second wave AI systems can’t really explain their decisions – just as a kid couldn’t write down Newton’s equations of motion merely by watching the flight of a ball through the air. At most, a second wave system can tell us that there is “an 87% chance that this is a picture of a cow”.

Third wave AI systems should be able to add some substance to that final conclusion. When a third wave system examines the same picture, it will presumably say that since there is a four-legged object in it, there’s a higher chance of it being an animal. And since its hide is white splotched with black, it’s even more likely to be a cow (or a Dalmatian dog). And since the animal also has udders and hooves, it’s almost certainly a cow. That, presumably, is what a third wave AI system would say.
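Nobody has built such a system yet, but here is a toy illustration of the kind of reasoning trail it might produce. The features and the evidential weight of each one are invented for the example; the point is only that the chain of evidence itself becomes the explanation.

```python
# A toy third wave sketch: combine separate pieces of evidence and
# narrate how each one shifts the conclusion. All numbers are invented.
EVIDENCE = {                                 # feature -> odds multiplier for "cow"
    "has four legs": 3.0,                    # four legs: probably an animal
    "white hide with black splotches": 4.0,  # narrows it to cow or Dalmatian
    "has udders and hooves": 25.0,           # this all but settles it
}

def classify(observed: list[str]) -> None:
    odds = 1.0                               # start from even odds
    for feature in observed:
        multiplier = EVIDENCE.get(feature, 1.0)
        odds *= multiplier
        print(f"- {feature}: odds x{multiplier:g} -> {odds:g}")
    probability = odds / (1 + odds)
    print(f"Conclusion: {probability:.1%} chance this is a cow")

classify(["has four legs",
          "white hide with black splotches",
          "has udders and hooves"])
```

Unlike the bare “87% cow” of a second wave system, every step here can be inspected and challenged.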

Third wave systems will be able to rely on several different statistical models to reach a more complete understanding of the world. They’ll be able to train themselves – just as AlphaGo did when it played a million Go games against itself – to identify the commonsense rules they should use. Third wave systems will also be able to combine information from several different sources to reach a nuanced and well-explained conclusion. Such a system could, for example, extract data from several of our wearable devices, from our smart home, from our car and from the city in which we live, and determine our state of health. Third wave systems will even be able to program themselves, and potentially develop abstract thinking.

The only problem is that, as the director of DARPA’s Information Innovation Office says himself, “there’s a whole lot of work to be done to be able to build these systems.”

And this, as far as the DARPA clip is concerned, is the story of AI systems: past, present and future.

What It All Means

DARPA’s clip does indeed explain the differences between the various AI systems, but it does little to assuage the fears of those who urge us to exercise caution in developing AI engines. DARPA makes clear that we’re not even close to developing a ‘Terminator’ AI, but that was never the issue in the first place. Nobody claims that today’s AI is sophisticated enough to do all the things it’s expected to do in a few decades: have a motivation of its own, make moral decisions, and even develop the next generation of AI.

But the fulfillment of the third wave is certainly a major step in that direction.

When third wave AI systems can decipher new models that improve their own function, all on their own, they’ll essentially be able to program new generations of software. When they understand context and the consequences of their actions, they’ll be able to replace most human workers, and possibly all of them. And when they’re allowed to reshape the models through which they appraise the world, they’ll effectively be able to reprogram their own motivation.

None of the above will happen in the next few years, and it certainly won’t be achieved in full within the next twenty. As I explained, no serious AI researcher claims otherwise. The core message of the researchers and visionaries who are concerned about the future of AI – people like Stephen Hawking, Nick Bostrom, Elon Musk and others – is that we need to start asking right now how to control the third wave AI systems that will become ubiquitous twenty years from now. When we consider the capabilities of these AI systems, that message does not seem far-fetched.

The Last Wave

The most interesting question for me, which DARPA does not seem to delve into, is what the fourth wave of AI systems will look like. Will it rely on an accurate emulation of the human brain? Or will fourth wave systems exhibit decision-making mechanisms that we are as yet incapable of understanding – mechanisms developed, perhaps, by the third wave systems themselves?

These questions are left open for us to ponder, to examine and to research.

That’s our task as human beings, at least until third wave systems go on to do that too.

What Will Google Look Like in 2030?

I was asked on Quora what Google will look like in 2030. Since that is one of the most important issues the world is facing right now, I took some time to answer it in full. 

Larry Page, one of Google’s two co-founders, once said off-handedly that Google is not about building a search engine. As he put it: “Oh, we’re really making an AI.” Google right now is all about building the world brain that will take care of every person, all the time, everywhere.

By 2030, Google will have that World Brain in existence, and it will look after all of us. And that’s quite possibly both the best and worst thing that could happen to humanity.

To explain that claim, let me tell you a story of how your day is going to unfold in 2030.

2030 – A Google World

You wake up in the morning, January 1st, 2030. It’s freezing outside, but you’re warm in your room. Why? Because Nest – your AI-based thermostat – knows exactly when you need to wake up, and warms the room you’re in so that you enjoy the perfect temperature for waking up.

And who acquired Nest three years ago for $3.2 billion USD? Google did.

[Image: Google acquired Nest for $3.2 billion USD. Source: Fang Digital Marketing]

You go out to the street and order an autonomous taxi to take you to your workplace. Who programmed that autonomous car? Google did. Who acquired Waze, the crowdsourced navigation app? That’s right: Google did.

After lunch, you take a stroll around the block with your Google Glass 2.0 on. Your smart glasses know it’s a cold day, they know you like hot cocoa, and they also know that there’s a cocoa store just around the bend which your friends have recommended before. So they offer to take you there – and if you agree, Google earns a few cents on anything you buy in the store. And who invented Google Glass…? I’m sure you get the picture.

I could go on and on, but the basic idea is that the entire world is going to become connected over the next twenty years. Many items will have sensors in and on them, and will connect to the cloud. And Google is not only going to produce many of these sensors and appliances (such as the Google Assistant, autonomous cars, Nest, etc.) but will also assign a digital assistant to every person – one that understands the user better than that person understands himself.

[Image: It’s a Google World. Source: ThemeReflex]

The Upside

I probably don’t have to explain why the Google World Brain will make our lives much more pleasant. The perfect coordination and optimization of our day-to-day dealings will mean we need to invest fewer resources (energy, time, concentration) to achieve a high quality of life. I see that primarily as a good thing.

So what’s the problem?

The Downside

Here’s the thing: the digital world suffers from what’s called “the one-winner effect”. Basically, it means there’s room for only one big winner in every sector. So there’s only one Facebook – the second-largest social network in English is Twitter, with only ~319 million users. That’s nothing compared to Facebook’s 1.86 billion users. Similarly, Google controls ~65% of the online search market. That’s a huge share when you realize that large, established competitors like Yahoo and Bing split most of the remaining ~35% between them. So again: one big winner.

So what’s the problem, you ask? Well, a one-winner market tends to create soft monopolies, in which one company provides the best services, and it’s simply too much of a hassle to leave for the competition. Google is creating such a soft monopoly. Imagine how difficult it would be to wake up tomorrow morning and migrate your e-mail address to one of the competitors, transfer all of your Google Docs there, sell your Android-based smartphone (Android being Google’s OS!) and replace it with an iPhone, and wake up cold in the morning because you’ve switched your Nest for some other appliance that hasn’t yet had time to learn your habits.

Can you imagine yourself doing all that? I’m sure some ardent souls will, but most of humanity doesn’t care deeply enough, or doesn’t even have the option to stop using Google. How do you stop using Google when every autonomous car on the street has a Google camera? How do you stop using Google when your website depends on Google not banning it? How do you stop using Google when practically every non-iPhone smartphone runs Google’s Android operating system? This is a Google world.

And Google knows it, too.

Google Flexes Its Muscles

Recently, around 200 people were banned from Google services because they had cheated Google by reselling the Pixel smartphone. Those people woke up one morning to find that they couldn’t log into their Gmail, that they couldn’t access their Google Docs – and had they been living in the future, they would probably have found that they couldn’t use Google’s autonomous cars and other services on the street. They were essentially sentenced to a digital death.

Public uproar eventually caused Google to back down and revive those people’s accounts, but the episode shows the power Google is starting to amass. What’s more, Google doesn’t have to ban people in such direct fashion. Imagine, for example, that your website gets demoted by Google’s search engine – whose inner workings nobody outside the company knows – simply because you spoke out against Google. Google is allowed by law to do that. So who’s going to stand up and talk smack about Google? Not me, that’s for sure. I love Google.

To sum things up: Google is not required by law to serve everyone, or even to be ‘fair’ in its recommendations about services. And as it gathers more power and becomes more prevalent in our daily lives, we will need to find mechanisms to ensure that Google, or Google-equivalent services, are provided to everyone – to prevent people from being left outside the system, and to preserve people’s ability to speak up against Google and other monopolies.

So in conclusion, it’s going to be a Google world, and I love Google. Now please share this answer, since I’m not sure Google will!

Note: none of this is to say that Google is ‘evil’ or similar nonsense. It isn’t even unique – if Google falls tomorrow, Amazon, Apple, Facebook or even Snapchat will take its place. This is simply the nature of the world at the moment: digital technologies give rise to big winners.

Things I’ve Learned as ISIS’ Chief Technology Officer; Or – Why ISIS Loves Trump

A few months ago I received a tempting offer: to become ISIS’ chief technology officer.

How could I refuse?

Before you pick up the phone and call the police, you should know that it was ‘just’ a wargame, initiated and run by the strategic consulting firm Wikistrat. Many experts on ISIS and the Middle East took part in the wargame, playing the various sides currently waging war on Syrian soil – from Syrian president Bashar al-Assad, to the Western-backed rebels, and even ISIS.

Wargames of this kind are pretty common in security organizations, as a way to understand how the enemy thinks. As Harper Lee wrote, “You never really understand a man… until you climb into his skin and walk around in it.”

And so, to understand ISIS, I climbed into its skin, and started thinking aloud and discussing with my ISIS teammates what we could do to really overwhelm our enemies.

But who are those enemies?

In one word, everyone.

This is no exaggeration. Abu Bakr al-Baghdadi, the leader of ISIS and its self-proclaimed caliph, warned Muslims in 2015 that the organization’s war is “the Muslims’ war altogether. It is the war of every Muslim in every place, and the Islamic State is merely the spearhead in this war.”

Other spiritual authorities who help explain ISIS’ policies to foreigners and potential converts agree with Baghdadi. The influential Muslim preacher Abu Baraa has similarly stated that “the world is divided into two camps. Make sure you are on the side of the Muslims. You shouldn’t be on the side of the infidels, nor should you be on the fence, neutral…”

This approach is, of course, quite convenient for ISIS, since the organization needs to draw as many Muslims as possible to its camp. And so, thinking as ISIS, we realized that we had to find a way to turn this seemingly small conflict of ours into a full-blown religious war: Muslims against everyone else.

Unfortunately, it seems most Muslims around the world do not agree with those ideas.

How could we convince them to accept the truth of the global religious war?

It was obvious that we needed to create a fracture between the Muslim and Christian worlds, but world leaders weren’t playing to our tune. The last American president, Barack Obama, fiercely refused to blame Islam for terror attacks, emphasizing that “We are not at war with Islam.”

French president Francois Hollande was even worse for our cause: after an entire summer of terror attacks in France, he still refused to blame Islam. Instead, he instituted a new Foundation for Islam in France, to improve relations with the nation’s Muslim community.

The situation was clearly dire. We needed reinforcements – fighters from Western countries. We needed Muslims to join us, or at the very least to rebel against their Western governments, but very few were joining us from Europe. Reports put the number of European Muslims who had joined ISIS at barely 4,000, out of 19 million Muslims living in Europe. That means just 0.02% of Europe’s Muslim population actually cared enough about ISIS to join us!

Things were even worse in the USA, where, according to the Pew Research Center, Muslims were generally content with their lives. They were just as likely as other Americans to have earned college degrees and attended graduate school, and to report household incomes of $100,000 or more. Nearly two thirds of Muslims stated that they “do not see a conflict between being a devout Muslim and living in a modern society”. Not much chance of inciting a holy war there.

So we agreed to try the usual things: planning terror attacks, making as much noise as we possibly could, keeping up the fight in the Middle East, and recruiting Muslims on social media. But we realized that things really needed to change if radical Islam were to have any chance at all. We needed a new kind of world leader: one who would play along with our idea of a global conflict; one who would close borders to Muslims, and make Muslim immigrants feel unwanted in their own countries; one who would turn a deaf ear to the pleas of refugees, simply because they came from Muslim countries.

After a single week in ISIS, it was clear that the organization desperately needs a world leader who thinks and acts like that.

Do you happen to know someone who might fit that bill?
