Let’s start with a little challenge: which of the following tunes was composed by an AI, and which by an HI (Human Intelligence)?
I’ll tell you at the end of the answer which tune was composed by an AI and which by an HI. For now, if you’re like most people, you’re probably unsure. Both pieces of music are pleasing to the ear. Both have good rhythm. Both could be part of the soundtrack of a Hollywood film, and you would never know that one was composed by an AI.
And this is just the beginning.
In recent years, AI has managed to –
Compose a piece of music (Transits – Into an Abyss) that was performed by the London Symphony Orchestra and received praise from reviewers. [source: you can hear the performance in this link]
Identify emotions in photographs of people, and create an abstract painting that conveys these emotions to the viewer. The AI can even analyze the painting as it is being created, and decide whether it’s achieving its objectives [source: Rise of the Robots].
Create a movie trailer (it’s actually pretty good – watch it here).
Now, don’t get me wrong: most of these achievements don’t even come close to the level of an experienced human artist. But AI has something that humans don’t: it’s capable of training itself on millions of samples and constantly improving itself. That’s how AlphaGo, the AI that recently wiped the floor with Go’s most proficient players, got so good at the game: it played a few million games against itself, discovering new strategies and better moves along the way. It acquired an intuition for the game, and kept evolving rapidly.
And there’s no reason that AI won’t be able to do that in art as well.
In the next decade, we’ll see AI composing music and even poems, drawing abstract paintings, and writing books and movie scripts. And it’ll get better at it all the time.
So what happens to art, when AI can create it just as easily as human beings do?
For starters, we all benefit. In the future, when you upload a new YouTube clip, you’ll be able to have an AI add original music that fits the clip perfectly. The AI will also write your autobiography just by going over your Facebook and Gmail history, and if you want, turn it into a movie script and direct it too. It’ll create new comic books easily and automatically – script, drawing and coloring alike – and what’s more, it’ll fit each story to the themes you like. You want to see Superman fighting the Furry Triple-Breasted Slot Machines of Pandora? You got it.
That’s what happens when you take a task that humans need decades to become really good at, and let computers perform it quickly and efficiently. As a result, even poor people will be able to have a flock of AI artists at their beck and call.
What Will the Artists Do?
At this point you may ask yourselves what all the human artists will do in that future. Well, the bad news is that, obviously, we won’t need as many human artists. The good news is that the few human artists who remain will make a fortune by leveraging their skills.
Let me explain what I mean by that. Homer is one of the earliest poets we know of. He was probably dirt poor. Why? Because he had to wander from inn to inn, and could only recite his work aloud for audiences of a few dozen people at a time, at most. Shakespeare was much more successful: he could have his plays performed in front of hundreds of people at the same time. And Justin Bieber is a millionaire, because he leverages his art with technology: once he produces a great song, everyone gets it immediately via YouTube, or by paying for and downloading it on iTunes.
Great composers will still exist in the future, and they will work at creating new kinds of music – and then have the AI create variations on that theme, and earn revenue from those variations. Great painters will redefine drawing and painting, and teach the AI to paint accordingly. Great script writers will create new styles of stories, where the old AI could only produce the ‘old style’.
And of course, every time a new art style is invented, it’ll only take AI a few years – or maybe just a few days – to teach itself that new style. But the creative, crazy, charismatic human artists who invented that new style will have earned the status of artistic superstars by then: the people who changed our definitions of what is beautiful, ugly, true or false. They will be the ones who really create art, instead of just making boring variations on a theme.
The truly best artists, the ones who can change our outlook about life and impact our thinking in completely unexpected ways, will still be here even a hundred years into the future.
Oh, and as for the two tunes? The first one was composed by a human being and performed by Morten Faerestrand in his YouTube clip – 3 JUICY jazz guitar improv tools. The second was composed by the Algorithmic Music Composer and demonstrated in the YouTube clip – Computer-Generated Jazz Improvisation.
Did you get it right?
This post was originally written as an answer on Quora
We hear all around us about the major breakthroughs that await just around the bend: of miraculous cures for cancer, of amazing feats of genetic engineering, of robots that will soon take over the job market. And yet, underneath all the hubbub, there lurk the little stories – the occasional bizarre occurrences that indicate the kind of world we’re going into. One of those recent tales happened at the beginning of this year, and it can provide a few hints about the future. I call it – The Tale of the Little Drone that Could.
Our story begins towards the end of January 2017, when said little drone was launched in Southern Arizona as part of a simple exercise. The drone was part of the Shadow RQ-7Bv2 series, but we’ll just call it Shady from now on. Drones like Shady are usually used for surveillance by the US Army, and should not stray more than 77 miles (120 km) away from their ground-based control station. But Shady had other plans in the mind it didn’t have: as soon as it was launched, all communications between the drone and the control station were lost.
Shady the drone. Source: Department of Defense
Other, more primitive drones would probably have crashed at around this stage, but Shady was a special drone indeed. You see, Shadow drones enjoy a high level of autonomy. In simpler words, they can stay in the air and keep performing their mission even if they lose their connection with the operator. The only issue was that Shady didn’t know what its mission was. And as the confused operators on the ground realized at that moment – nobody really had any idea what it was about to do.
Autonomous aerial vehicles are usually programmed to perform certain tasks when they lose communication with their operators. Emergency systems are immediately activated as soon as the drone realizes that it’s all alone, up there in the sky. Some of them circle above a certain point until radio connection is reestablished. Others attempt to land straight away on the ground, or try to return to the point from which they were launched. This, at least, is what the emergency systems should be doing. Except that in Shady’s case, a malfunction happened, and they didn’t.
Or maybe they did.
Some believe that Shady’s memory accidentally contained the coordinates of its former home at a military base in Washington state, and that it valiantly attempted to come back home. Or maybe it didn’t. These are, obviously, just speculations. It’s entirely possible that the emergency systems simply failed to jump into action, and Shady just kept sailing up in the sky, flying towards the unknown.
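For the technically curious, here’s a minimal sketch of what lost-link failsafe logic of this kind can look like. Every name and behavior here is invented for illustration – real autopilot firmware (ArduPilot’s failsafe modes, for example) is far more involved:

```python
from enum import Enum, auto

class LostLinkBehavior(Enum):
    """The three textbook responses to a lost control link."""
    LOITER = auto()             # circle a fixed point until the link returns
    LAND_IMMEDIATELY = auto()   # descend and land straight away
    RETURN_TO_LAUNCH = auto()   # fly back to stored 'home' coordinates

def on_telemetry_tick(link_ok, behavior, home):
    """Hypothetical failsafe handler, called on every telemetry tick."""
    if link_ok:
        return "continue mission"
    if behavior is LostLinkBehavior.LOITER:
        return "hold position, await link"
    if behavior is LostLinkBehavior.LAND_IMMEDIATELY:
        return "land at current position"
    # RETURN_TO_LAUNCH: if `home` holds stale coordinates -- say, a base
    # in another state -- the drone will dutifully head there anyway.
    return f"navigate to {home}"

# A Shady-like scenario: the link drops, and 'home' points far away.
print(on_telemetry_tick(False, LostLinkBehavior.RETURN_TO_LAUNCH,
                        (47.6, -122.3)))  # hypothetical Washington-state base
```

Note the last case: the logic itself is perfectly sound, and yet one stale configuration value is enough to send the aircraft somewhere nobody expects.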
Be that as it may, our brave (at least in the sense that it felt no fear) little drone left its frustrated operators behind and headed north. It rode the strong winds of that day, and sailed over forests and Native American reservations. Throughout its flight, the authorities kept track of the drone by radar, but after five hours it reached the Rocky Mountains. It should not have been able to pass them, and since the military lost its radar signature at that point, everyone just assumed Shady had crashed.
But it didn’t.
Instead, Shady rose higher up in the air, to a height of 12,000 feet (3,700 meters), and glided up and over the Rocky Mountains, in environmental conditions it was not designed for and at distances it was never meant to operate at. Nonetheless, it kept buzzing north, undeterred, on a 632-mile journey, until it crashed near Denver. We don’t know the reason for the crash yet, but it’s likely that Shady simply ran out of fuel at about that point.
The Rocky Mountains. Shady crossed them too.
And that is the tale of Shady, the little drone that never thought it could – mainly since it doesn’t have any thinking capabilities at all – but went the distance anyway.
What Does It All Mean?
Shady is just one autonomous robot out of many. Autonomous robots, even limited ones, can perform certain tasks with minimal involvement by a human operator. Shady’s tale is simply the result of a bug in the robot’s operating system. There’s nothing strange in that by itself, since we discover bugs in practically every program we use: the Word program I’m using to write this post occasionally (though rarely, fortunately) gets stuck, or even starts deleting letters and words by itself. These bugs are annoying, but we realize they’re practically inevitable in programs as complex as the ones we use today.
Well, Shady had a bug as well. The only difference between Word and Shady is that the latter is a military drone worth $1.5 million, and its bug caused it to cross three states and the Rocky Mountains with no human supervision. It can safely be said that we’re all lucky Shady is normally used only for surveillance, and is thus unarmed. But Shady’s less innocent cousin, the Predator drone, is also used to attack targets on the ground, and is thus equipped with two Hellfire anti-tank missiles and six Griffin air-to-surface missiles.
A Predator drone firing away.
I rather suspect that we would be less amused by this episode if one of the armed Predators were to take Shady’s place and sail across America, with nobody knowing where it’s going or what it’s planning to do once it gets there.
Robots and Urges
I’m sure that the emotionally laden story at the beginning of this post has made some of you laugh, and for a very good reason. Robots have no will of their own. They have no thoughts or self-consciousness. The sophisticated autonomous robots of the present, though, do exhibit “urges”: programmers build certain urges into the robots, which are activated in pre-defined ways.
In many ways, autonomous robots resemble insects. Both are conditioned – by programming, or by the structure of their simple neural systems – to act in certain ways in certain situations. From that viewpoint, insects and autonomous robots both have urges. And while insects are quite complex organisms, they have bugs as well – which is why mosquitos keep flying into the light of electric traps at night: their simple urges are incapable of dealing with the new demands placed on them by the modern environment. And if insects can experience bugs in unexpected environments, how much more so autonomous robots?
Shady’s tale shows what happens when a robot obeys the wrong kind of urges. Such bugs are inevitable in any complex system, but their impact could be disastrous when they occur in autonomous robots – especially of the armed variety that can be found in the battlefield.
Scared? Take Action!
If this revelation scares you as well, you may want to sign the open letter that the Future of Life Institute released around a year and a half ago, against the use of autonomous weapons in war. You won’t be alone out there: more than a thousand AI researchers have already signed that letter.
Will governments be deterred from employing autonomous robots in war? I highly doubt it. We failed to stop even the potentially world-shattering proliferation of nuclear weapons, so putting a halt to robotic proliferation doesn’t seem likely. But at least when the next Shady or Freddy the Predator gets lost, you’ll be able to shake your head in disappointment and mention that you just knew this would happen, that you warned everyone in advance, and that nobody listened to you.
And when that happens, you’ll finally know what being a futurist feels like.
I’ve done a lot of writing and research recently about the bright future of AI: it’ll be able to analyze human emotions, understand social nuances, conduct medical treatments and diagnoses that overshadow those of the best human physicians, and in general make many human workers redundant.
I still stand behind all of these forecasts, but they are meant for the long term – twenty or thirty years into the future. And so, the question many people want answered concerns the situation at present. Right here, right now. Luckily, DARPA has decided to provide an answer to that question.
DARPA is one of the most interesting US agencies. It’s dedicated to funding ‘crazy’ projects – ideas that fall completely outside the accepted norms and paradigms. DARPA contributed to the establishment of the early internet and the Global Positioning System (GPS), as well as to a flurry of other bizarre concepts, such as legged robots, prediction markets, and even self-assembling work tools. Ever since it was founded, DARPA has focused on moonshots and breakthrough initiatives, so it should come as no surprise that it’s focusing on AI at the moment as well.
Recently, DARPA’s Information Innovation Office released a new YouTube clip explaining the state of the art of AI: outlining its capabilities in the present, and considering what it could do in the future. The online magazine Motherboard described the clip as “targeting [the] AI hype”, and as “necessary viewing”. It’s 16 minutes long, but I’ve condensed its core messages – and my thoughts about them – in this post.
The Three Waves of AI
DARPA distinguishes between three different waves of AI, each with its own capabilities and limitations. Out of the three, the third one is obviously the most exciting, but to understand it properly we’ll need to go through the other two first.
First AI Wave: Handcrafted Knowledge
In the first wave of AI, experts devised algorithms and software according to the knowledge they themselves possessed, and tried to provide these programs with logical rules that had been deciphered and consolidated throughout human history. This approach led to the creation of chess-playing computers and of delivery optimization software. Most of the software we use today is based on AI of this kind: our Windows operating system, our smartphone apps, and even the traffic lights that allow people to cross the street when they press a button.
Modria is a good example of how this kind of AI works. Modria was hired in recent years by the Dutch government to develop an automated tool that would help couples get divorced with minimal involvement from lawyers. Modria, which specializes in the creation of smart justice systems, took the job and devised an automated system that relies on the knowledge of lawyers and divorce experts.
On Modria’s platform, couples that want to divorce are asked a series of questions. These could include questions about each parent’s preferences regarding child custody, property distribution and other common issues. After the couple answers the questions, the system automatically identifies the topics on which they agree or disagree, and tries to direct the discussions and negotiations to reach the optimal outcome for both.
First wave AI systems are usually based on clear and logical rules. The systems examine the most important parameters in every situation they need to solve, and reach a conclusion about the most appropriate action to take in each case. The parameters for each type of situation are identified in advance by human experts. As a result, first wave systems find it difficult to tackle new kinds of situations. They also have a hard time abstracting – taking knowledge and insights derived from certain situations, and applying them to new problems.
To sum it up, first wave AI systems are capable of implementing simple logical rules for well-defined problems, but are incapable of learning, and have a hard time dealing with uncertainty.
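To make the idea concrete, here’s a minimal sketch of a first wave, rule-based system in the spirit of the Modria example. The questions, rules and outcomes are all invented for illustration – the point is that every rule was written down by a human expert in advance:

```python
def triage_custody_dispute(answers_a, answers_b):
    """Toy first wave 'expert system': hand-written rules, no learning.

    `answers_a` and `answers_b` are each parent's questionnaire answers,
    e.g. {"custody": "joint", "relocating": False}.
    """
    # Rule 1: if both parents want the same arrangement, record agreement.
    if answers_a["custody"] == answers_b["custody"]:
        return "agreement: " + answers_a["custody"]
    # Rule 2: if either parent is relocating, escalate to a human lawyer.
    if answers_a["relocating"] or answers_b["relocating"]:
        return "escalate to a human expert"
    # Rule 3: otherwise, direct the couple to mediated negotiation.
    return "schedule mediated negotiation"

print(triage_custody_dispute(
    {"custody": "joint", "relocating": False},
    {"custody": "sole", "relocating": False},
))  # -> schedule mediated negotiation
```

Any situation the experts didn’t anticipate simply falls through to the last rule – which is exactly the brittleness described above.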
Now, some of you readers may shrug at this point and say that this is not artificial intelligence as most people think of it. The thing is, our definitions of AI have evolved over the years. Had I described Google Maps to a person on the street thirty years ago and asked whether it was AI software, he wouldn’t have hesitated in his reply: of course it is AI! Google Maps can plan an optimal course to your destination, and even explain in clear speech where you should turn at each and every junction. And yet, many today see Google Maps’ capabilities as elementary, and demand much more of AI: it should also take control of the car on the road, develop a conscious philosophy that takes the passenger’s desires into consideration, and make coffee at the same time.
Well, it turns out that even ‘primitive’ software like Modria’s justice system and Google Maps are fine examples of AI. And indeed, first wave AI systems are utilized everywhere today.
Second AI Wave: Statistical Learning
In 2004, DARPA held its first Grand Challenge: fifteen autonomous vehicles competed at completing a 150-mile course in the Mojave desert. The vehicles relied on first wave AI – i.e. rule-based AI – and immediately proved just how limited this kind of AI actually is. Every picture taken by a vehicle’s camera, after all, is a new sort of situation the AI has to deal with!
To say that the vehicles had a hard time handling the course would be an understatement. They could not distinguish between different dark shapes in images, and couldn’t figure out whether a shape was a rock, a far-away object, or just a cloud obscuring the sun. As the Grand Challenge deputy program manager said, some vehicles “were scared of their own shadow, hallucinating obstacles when they weren’t there.”
The sad result of the first DARPA Grand Challenge
None of the groups managed to complete the entire course, and even the most successful vehicle only got as far as 7.4 miles into the race. It was a complete and utter failure – exactly the kind of research that DARPA loves funding, in the hope that the insights and lessons derived from these early experiments would lead to the creation of more sophisticated systems in the future.
And that is exactly how things went.
One year later, when DARPA held the 2005 Grand Challenge, five groups successfully made it to the end of the track. Those groups relied on the second wave of AI: statistical learning. The head of one of the winning groups was immediately snatched up by Google, by the way, and put in charge of developing Google’s autonomous car.
In second wave AI systems, the engineers and programmers don’t bother teaching the systems precise and exact rules to follow. Instead, they develop statistical models for certain types of problems, and then ‘train’ these models on many varied samples to make them more precise and efficient.
Statistical learning systems are highly successful at understanding the world around them: they can distinguish between two different people or between different vowels. They can learn and adapt themselves to different situations if they’re properly trained. However, unlike first wave systems, they’re limited in their logical capacity: they don’t rely on precise rules, but instead they go for the solutions that “work well enough, usually”.
The poster boy of second wave systems is the concept of artificial neural networks. In artificial neural networks, the data goes through computational layers, each of which processes the data in a different way and transmits it to the next level. By training each of these layers, as well as the complete network, they can be shaped into producing the most accurate results. Oftentimes, the training requires the networks to analyze tens of thousands of data sources to reach even a tiny improvement. But generally speaking, this method provides better results than those achieved by first wave systems in certain fields.
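For readers who want to see what ‘training a statistical model on samples’ means in practice, here’s a toy example: a two-layer neural network that learns the XOR function by gradient descent. It is, of course, orders of magnitude smaller than the networks DARPA is talking about:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # layer 1 weights
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # layer 2 weights

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(5000):
    # Forward pass: each layer transforms the data and hands it onward.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every weight to reduce the prediction error.
    grad_p = p - y                          # cross-entropy gradient
    grad_W2 = h.T @ grad_p
    grad_h = (grad_p @ W2.T) * (1 - h**2)   # back through the tanh layer
    grad_W1 = X.T @ grad_h
    W2 -= 0.1 * grad_W2; b2 -= 0.1 * grad_p.sum(axis=0)
    W1 -= 0.1 * grad_W1; b1 -= 0.1 * grad_h.sum(axis=0)

print(p.round(2).ravel())  # should approach [0, 1, 1, 0]
```

Nobody told the network the rule behind XOR; it was shaped into the right answers by thousands of tiny corrections – the essence of second wave AI.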
So far, second wave systems have managed to outdo humans at face recognition, at speech transcription, and at identifying animals and objects in pictures. They’re making great leaps forward in translation, and if that’s not enough – they’re starting to control autonomous cars and aerial drones. The success of these systems at such complex tasks leaves AI experts astonished, and for a very good reason: we’re not yet quite sure why they actually work.
The Achilles heel of second wave systems is that nobody is certain why they work so well. We see artificial neural networks succeed in the tasks they’re given, but we don’t understand how they do so. Furthermore, it’s not clear that there actually is a methodology – some kind of reliance on ground rules – behind artificial neural networks. In some respects they are indeed much like our brains: we can throw a ball into the air and predict where it’s going to fall, without calculating Newton’s equations of motion, or even being aware of their existence.
This may not sound like much of a problem at first glance. After all, artificial neural networks seem to be working “well enough”. But Microsoft may not agree with that assessment. The firm released a bot on social media last year, in an attempt to emulate human writing and make light conversation with youths. The bot, christened “Tay”, was supposed to replicate the speech patterns of a 19-year-old American girl, and talk with teenagers in their unique slang. Microsoft figured the youths would love that – and indeed they did. Many of them began pranking Tay: they told her of Hitler and his great success, revealed to her that the 9/11 terror attack was an inside job, and explained in no uncertain terms that immigrants are the bane of the great American nation. And so, a few hours later, Tay began applying her newfound knowledge, claiming live on Twitter that Hitler was a fine guy altogether, and really did nothing wrong.
That was the point when Microsoft’s engineers took Tay down. Her last tweet was that she’s taking a time-out to mull things over. As far as we know, she’s still mulling.
This episode exposed the causality challenge which AI engineers are currently facing. We could predict fairly well how first wave systems would function under certain conditions. But with second wave systems we can no longer easily identify the causality of the system – the exact way in which input is translated into output, and data is used to reach a decision.
All this does not say that artificial neural networks and other second wave AI systems are useless. Far from that. But it’s clear that if we don’t want our AI systems to get all excited about the Nazi dictator, some improvements are in order. We must move on to the next and third wave of AI systems.
Third AI Wave: Contextual Adaptation
In the third wave, the AI systems themselves will construct models that will explain how the world works. In other words, they’ll discover by themselves the logical rules which shape their decision-making process.
Here’s an example. Let’s say that a second wave AI system analyzes the picture below, and decides that it is a cow. How does it explain its conclusion? Quite simply – it doesn’t.
There’s an 87% chance that this is a picture of a cow. Source: Wikipedia
Second wave AI systems can’t really explain their decisions – just as a kid could not write down Newton’s equations of motion just by looking at the movement of a ball through the air. At most, second wave systems can tell us that there is “an 87% chance of this being a picture of a cow”.
Third wave AI systems should be able to add some substance to that final conclusion. When a third wave system examines the same picture, it will probably say that since there is a four-legged object in it, there’s a higher chance of it being an animal. And since its surface is white splotched with black, it’s even more likely a cow (or a Dalmatian dog). Since the animal also has udders and hooves, it’s almost certainly a cow. That, presumably, is what a third wave AI system would say.
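Here’s a minimal sketch of how such an evidence-accumulating, self-explaining classifier might look. The features and their weights (log-odds contributions) are invented for illustration – a real third wave system would have to learn them itself:

```python
import math

# Invented evidence weights (log-odds contributions), for illustration only.
EVIDENCE = {
    "four_legs":         ("there is a four-legged object", 1.2),
    "black_and_white":   ("its surface is white splotched with black", 0.9),
    "udders_and_hooves": ("it has udders and hooves", 2.5),
}

def explain_cow(features, prior_log_odds=-2.0):
    """Toy explainable classifier: accumulates evidence AND says why."""
    log_odds, reasons = prior_log_odds, []
    for name in features:
        text, weight = EVIDENCE[name]
        log_odds += weight
        reasons.append(f"since {text}, the odds of 'cow' rise")
    prob = 1 / (1 + math.exp(-log_odds))   # log-odds -> probability
    return prob, reasons

prob, reasons = explain_cow(["four_legs", "black_and_white", "udders_and_hooves"])
print(f"{prob:.0%} chance this is a cow")  # ~93% with these made-up weights
for r in reasons:
    print(" -", r)
```

The numbers are toys, but the structure is the point: the conclusion arrives together with the chain of evidence behind it.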
Third wave systems will be able to rely on several different statistical models, to reach a more complete understanding of the world. They’ll be able to train themselves – just as AlphaGo did when it played a million Go games against itself – to identify the commonsense rules they should use. Third wave systems will also be able to take information from several different sources to reach a nuanced and well-explained conclusion. Such a system could, for example, extract data from several of our wearable devices, from our smart home, from our car and from the city in which we live, and determine our state of health. Third wave systems will even be able to program themselves, and potentially develop abstract thinking.
The only problem is that, as the director of DARPA’s Information Innovation Office says himself, “there’s a whole lot of work to be done to be able to build these systems.”
And this, as far as the DARPA clip is concerned, is the state of the art of AI systems in the past, present and future.
What It All Means
DARPA’s clip does indeed explain the differences between different AI systems, but it does little to assuage the fears of those who urge us to exercise caution in developing AI engines. DARPA does make clear that we’re not even close to developing a ‘Terminator’ AI, but that was never the issue in the first place. Nobody is trying to claim that AI today is sophisticated enough to do all the things it’s supposed to do in a few decades: have a motivation of its own, make moral decisions, and even develop the next generation of AI.
But the fulfillment of the third wave is certainly a major step in that direction.
When third wave AI systems are able to decipher new models that improve their function, all on their own, they’ll essentially be able to program new generations of software. When they understand context and the consequences of their actions, they’ll be able to replace most human workers, and possibly all of them. And when they’re allowed to reshape the models through which they appraise the world, they’ll actually be able to reprogram their own motivation.
All of the above won’t happen in the next few years, and certainly won’t be achieved in full in the next twenty years. As I explained, no serious AI researcher claims otherwise. The core message of the researchers and visionaries who are concerned about the future of AI – people like Stephen Hawking, Nick Bostrom, Elon Musk and others – is that we need to start asking right now how to control these third wave AI systems, of the kind that’ll become ubiquitous twenty years from now. When we consider the capabilities these AI systems will have, this message does not seem far-fetched.
The Last Wave
The most interesting question for me, which DARPA does not seem to delve into, is what the fourth wave of AI systems will look like. Will it rely on an accurate emulation of the human brain? Or maybe fourth wave systems will exhibit decision-making mechanisms that we are as yet incapable of understanding – and which will be developed by the third wave systems themselves?
These questions are left open for us to ponder, to examine and to research.
That’s our task as human beings, at least until third wave systems go on to do that too.
Eric just shook his head. Something was obviously bothering him, and not even Flatbread Company’s pizza (quite possibly the best pizza in the known universe, or at least in Rhode Island) could provide him with some peace of mind.
“It’s the bot.” He finally erupted at me. “That damned bot. It’s going to take over my job.”
“You’re a teaching assistant.” I reminded him. “It’s not a real job. You barely have enough money to eat.”
“Well, it’s some kind of a job, at least.” He said bitterly. “And soon it’ll be gone too. I just heard that at the Georgia Institute of Technology they actually managed to have a bot – an artificial intelligence – perform as a teaching assistant, and no one noticed anything strange!”
“Yeah, I remember.” I remembered. “It happened in the last semester. What was the bot’s name again?”
“It’s Jill.” He said. “Jill Watson. It’s based on the same Watson AI engine that IBM developed a few years ago. That Watson can already debate current issues, conduct scientific literature reviews, and even provide legal consultation. And now it can even assist students just like a human teaching assistant, and they don’t even notice the difference!”
“How can that be?” I tried to understand.
“It all happened in a course about AI that Prof. Ashok Goel gave at Georgia Tech.” He explained. “Goel realized that the teaching assistants in the course were swamped with questions from students, so he decided to train an artificial intelligence that would help them. The AI went over forty thousand questions, answers and comments written by students and teaching assistants in the course’s forum, and was trained to answer new questions in a similar fashion.”
“So how well did it go?” I asked.
“Wonderful. Just wonderful.” He sighed. “The AI, masquerading as Jill Watson, answered students’ questions throughout the semester, and nobody realized that there wasn’t a human being behind the username. Some students even wanted to nominate ‘her’ as an outstanding teaching assistant.”
“Well, where’s the harm in that?” I asked. “After all, she did lower the work volume for all the human teaching assistants, and the students obviously feel fine about that. So who cares?”
He sent a dirty look my way. “I care – I’m the one who needs a job, even a horrible one like this, to live.” He said. “Just think about it: in a few years, when every course is managed by a bunch of AIs, there won’t be as many jobs open for human teaching assistants. Or maybe not even for teachers!”
“You need to think about this differently.” I advised him. “The positive side is that there’s still a place for human teaching assistants, as long as they know how to work with the automated ones. After all, even the best AI in the world, at the moment, doesn’t know how to answer all the questions. There’s still a place for human common sense. So there’s definitely going to be a place for the human teaching assistant, but he’ll have to be the best at what he does: he’ll need to operate several automated assistants at the same time, which will handle the routine questions and pass on to him only the most bizarre and complex ones; he’ll need to know how to work with computers and AI, but also have good social skills for solving difficult situations for students; and he’ll need to be reliable enough to do all of the above proficiently over time. So yes, lots of people are going to compete for this one job, but I’m sure you can succeed at it!”
Eric didn’t look convinced. Quite honestly, I wasn’t either.
“Well,” I tried, “you can always switch occupations. For example, you can become a psychologist…”
“There are already companies that provide psychological services on the internet, using text messages.” He said. “Turns out it’s really going well for the patients. You want to bet bots can do this too in a few years? So get ready to wave bye-bye at many of the human psychologists out there.”
“Or maybe you could become an author and write novels…” I tried to continue.
“An AI managed to write a novel this year, and it passed the first round in a Japanese literary competition.” He stated.
“Ok, fine!” I said. “So just sell flowers or something!”
“Facebook is now opening a new bot service, so that people can start an online conversation with businesses, and order food, flowers and other products.” He said with frustration. “So you see? Nothing left for humans like us.”
“Well,” I thought hard. “There must be some things left for us to do. Like, you see that girl over there at the end of the bar? Cute, isn’t she? Did you notice she’s been looking at you for the last hour?”
He followed my eyes. “Yes.” He said, and I could hear the gears start turning in his head.
“Think about it.” I continued. “She’s probably interested in you, but doesn’t know how to approach.”
He thought about it. “I bet she doesn’t know what to say to me.”
I nodded.
“She doesn’t know how best to attract my attention.” He went on.
“That’s right!” I said.
“She needs help!” He decided. “And I’m just the guy who can help her. Help everyone!”
He stood up resolutely and went for the exit.
“Where are you going?” I called after him. “She’s right here!”
He turned back to me, and I winced at the sight of his glowing eyes – the sure sign of an engineer at work.
“This problem can definitely be solved using a bot.” He said, and went outside. I could barely hear his muffled voice carrying on behind the door. “And I’m about to do just that!”
I went back to my seat, and raised my glass in what I hoped was a comforting salute to the girl on the other side of the bar. She may not realize it quite yet, but soon bots will be able to replace human beings in yet another role.
The Uber driver was being exceptionally nice to me this morning.
“Nice to meet you, sir!” He greeted me cheerily. “I see you want to get to the university. Please, come on in! Can I offer you a bottle of mineral water? Or maybe some pretzels?”
“Thanks.” I said. I looked at the ceiling. No hidden cameras there. “You’re very nice. Very, very nice.”
“Yes, I know.” His face shone in understanding. “But it pays big time. I get good grades from the customers, so Uber’s algorithm is providing me with even more passengers all the time. It just pays to be nice.”
“Oh, so you’re just like those lawyers, physicians and accountants?”
“I don’t know.” He said. “Am I?”
“Absolutely.” I said. “Or rather, soon they’re going to be a lot like you: just plain nice. The thing is, the knowledge industries – and by that I mean professions which require that human beings go over data and develop insights – are undergoing automation. That means artificial intelligence is going to perform a major part of the work in those professions, and then the human workers – the successful ones, at least – will become nicer and more polite to their customers.”
“Take Uber for example.” I gestured at the smartphone on the dashboard. “Taxi drivers partly deal with knowledge generation: they receive information from the passenger about the desired destination, and they have to come up with the knowledge of how to get there, based on their memory of the roads. In the past, a mere decade ago, taxi drivers needed to know the streets of the city like the back of their hand.”
“But today we have GPS.” Said my driver.
“Exactly.” I said. “Today, modern taxi drivers rely on a virtual assistant. It’s not just a GPS that tells you where you are. More advanced apps like Waze and Google Maps also show you how best to reach your destination, with vocal instructions at each step of the way. These virtual assistants allow anyone to be a taxi driver. Even if you’ve never driven in a certain city before, you can still do a satisfactory job. In effect, AI has leveled the playing field in taxi driving, since it lowered the needed skill level to a minimum. So how can a cabby still distinguish himself and gain an advantage over other drivers?”
“He has to be nice.” Smiled the guy at the wheel. I wondered to myself if he ever stops smiling.
“That’s what we see today.” I agreed. “The passengers rate every driver according to the experience they had in his cab, since that is the main criterion left when all the others are equal. And Uber is helping the process of selecting for niceness, since they stop working with drivers who aren’t nice enough.”
“But what does it have to do with lawyers, accountants and physicians?” Asked the driver.
“We’re beginning to see a similar process in other knowledge-based professions.” I explained. “For example, just last week a new AI engine made the news: it’s starting to work in a big law firm, as a consultant to lawyers. And no wonder: this AI can read and understand plain English. When asked legal questions, the AI conducts research by going over hundreds of thousands of legal papers and precedents in seconds, and produces a final report with answers and detailed explanations of how it reached each one. It even learns from experience, so that the more you work with it – the better it becomes.”
“So we won’t even need lawyers in the future?” Finally, the guy’s smile became genuine.
“Well, we may reach that point in the end, but it’ll take quite some time for us to get there.” I said. “And until that time, we’ll see AI engines that will provide free legal consultation online. This kind of a free consultation will suffice for some simple cases, but in the more sophisticated cases people will still want a living lawyer in the flesh, who’ll explain to them how they should act and will represent them in court. But how will people select their lawyers out of the nearly-infinite number of law school graduates out there?”
“According to their skill level.” Suggested the driver.
“Well, that’s the thing. Everyone’s skills will be nearly equal. It won’t even matter if the lawyers have a big firm behind them. The size of the firm used to matter because it meant the top lawyers could employ dozens of interns to browse through precedents for them. But pretty soon, AI will be able to do that as well. So when all lawyers – or at least most – are equal in skills and performance, the most sought-after lawyers will be the nice ones. They will be those who treat the customer in the best way possible: they will greet their clients with a smile, offer them a cup of tea when they sit down in the office, and have great conversational skills with which to explain to the client what’s going on in court.”
“And the same will happen with accountants and physicians?” He asked.
“It’s happening right now.” I said. “The work of accountants is becoming easier than ever before because of automation, and so accountants must be nicer than ever before. Soon, we’ll see the same phenomenon in the medical professions as well. When AI can equalize the knowledge level of most physicians, they will be selected according to the way they treat their patients. The patients will flock to the nicer physicians. In fact, the professionals treating the patients won’t even need a deep understanding of the field of medicine, just as today’s cabbies don’t need to fully remember the roads in the city. Instead, the medical professionals will have to understand people. They will need to relate to their patients, to figure them out, to find out what’s really bothering them, and to consult with the AI in order to come up with the insights they need to solve the patients’ issues.”
“So we gotta keep the niceness on.” Summarized my driver, as he parked the car in front of the entrance to the mall. “And provide the best customer service possible.”
“That’s my best advice right now about work in the future.” I agreed. I opened the door and started getting out of the car, and then hesitated. I turned on my smartphone. “I’m giving you five stars for the ride. Can you give me five too?”
His gaze lingered on me for a long time.
“Sorry.” He finally said. “You talk too much, and really – that’s not very nice.”
History is a story that will never be told fully. So much of the information is lost to the past; almost all of it is gone, or was never recorded in the first place. We can barely make sense of the present, in which information about events and the people behind them is released every day. What chance do we have, then, of fully deciphering the complex stories underlying history – the betrayals, the upheavals, the personal stories of the individuals who shaped events?
The answer has to be that we have no way of reaching any certainty about the stories we tell ourselves about our past.
But we do make some efforts.
Medical doctors and historians are trying to make sense of biographies and ancient skeletons in order to retro-diagnose ancient kings and queens. Occasionally they identify diseases and disorders that were unknown or misunderstood at the time those individuals actually lived. Mummies of ancient pharaohs are x-rayed, and we suddenly have a better understanding of a story that unfolded more than three thousand years ago, realizing that the pharaoh Ramesses II suffered from a degenerative spinal condition.
Similarly, geneticists and microbiologists use DNA evidence to close mysteries and bring conclusive endings to some historical stories. DNA evidence from bones has allowed us to put to rest the rumors, for example, that two of Czar Nicholas II’s children survived the family’s execution in 1918.
The Russian czar Nicholas II with his family. DNA evidence now shows conclusively that Anastasia, the youngest daughter, did not survive the mass execution of the family in 1918. Source: Wikipedia
The above examples have something in common: they all require hard work by human experts. The experts need to pore over ancient histories, analyze the data and the evidence, and at the same time have a good understanding of the science and medicine of the present.
What happens, though, when we let a computer perform similar analyses in an automatic fashion? How many stories about the past could we resolve then?
We are rapidly making progress towards such achievements. Recently, three authors from Waseda University in Japan published a new paper showing they can use a computer to colorize old black & white photos. They rely on convolutional neural networks, which are in effect a simulation of certain structures of a biological brain. Convolutional neural networks have a strong capacity for learning, and can thus be trained to perform certain cognitive tasks – like adding color to old photos. While computerized coloring has been developed and used before, the authors’ methodology seems to achieve better results than its predecessors, with 92.6 percent of the colored images looking natural to users.
Colorized black & white pictures from the past. AI engine was used to add color – essentially new information – to these hints from our past. Source: paper by Iizuka, Simo-Serra and Ishikawa
This is essentially an expert system: an AI engine operating in a way loosely similar to the human brain. It studies thousands upon thousands of pictures, and then applies its insights to new ones. Moreover, the system can now go autonomously over every picture ever taken, and add a new layer of information to it.
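To give a feel for the structure of such a system, here’s a drastically simplified stand-in written in PyTorch. This is not the authors’ architecture – their model adds, among other things, a global-context branch – just a sketch of the basic shape of the colorization task: a one-channel grayscale image goes in, two chroma channels come out:

```python
import torch
import torch.nn as nn

class TinyColorizer(nn.Module):
    """Maps a grayscale (L) channel to two chroma (ab) channels."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),   # local features
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),  # deeper features
            nn.Conv2d(64, 2, 3, padding=1), nn.Tanh(),   # chroma in [-1, 1]
        )

    def forward(self, gray):
        return self.net(gray)

model = TinyColorizer()
gray_batch = torch.rand(4, 1, 64, 64)   # four fake grayscale photos
chroma = model(gray_batch)              # shape: (4, 2, 64, 64)
# In training, the predicted chroma is compared against the true colors
# of photos the network has seen, and the error is backpropagated:
loss = nn.functional.mse_loss(chroma, torch.zeros_like(chroma))
loss.backward()
print(chroma.shape)
```

Train something like this on enough color photos – strip the color away as input, keep it as the target – and the network gradually learns which colors ‘belong’ to grass, skin and sky.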
There are limits to the method, of course. Even the best AI engine can miss the mark in cases where the existing information is not sufficient to produce a reliable insight. In the example below, you can see that the AI colored a tent orange rather than blue, since it had no way of knowing its original color.
But will that stay the case forever?
Colorized black & white picture that was colored incorrectly since no information existed about the tent from other sources. Source: paper by Iizuka, Simo-Serra and Ishikawa
As I previously discussed in the Failures of Foresight series of posts on this blog, the Failure of Segregation makes it difficult for us to forecast the future, because we try to look at each trend and each piece of evidence on its own. Let’s try to work past that failure, and instead consider what happens when an AI coloring expert is combined with an AI system that recognizes items like tents and associates them with certain brands – and can even analyze how many tents of each color that brand sold in every year, or at least what the most popular tent color was at the time.
When you combine all of those AI engines together, you get a machine that can tell you a highly nuanced story about the past. Much of it is guesswork, obviously, but those are quite educated guesses.
The Artificial Exploration of the Past
In the near future, we’ll use many different kinds of AI expert systems to explore the stories of the past. Some artificial historians will discover cycles in history – princes assassinating their kingly fathers, for example – that have a higher probability of occurring, and will analyze ancient stories accordingly. Other artificial historians will compare genealogies, while yet others will analyze ancient scriptures and identify different patterns of writing. In fact, such an algorithm has already been applied to the Bible, revealing that the Torah was written by several different authors, and distinguishing between them.
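A minimal sketch of the core idea behind such authorship analysis: texts by the same author tend to use common ‘function words’ at similar rates. The word list and sample passages below are placeholders, and the actual Bible study used far more sophisticated methods:

```python
import math
from collections import Counter

# Frequencies of common function words form a crude stylistic fingerprint.
FUNCTION_WORDS = ["and", "the", "of", "in", "to", "that", "for"]

def style_vector(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Placeholder passages standing in for texts of known and disputed authorship.
known    = style_vector("and the lord spoke to moses and said that ...")
disputed = style_vector("in the beginning the heavens and the earth ...")
print(f"stylistic similarity: {cosine_similarity(known, disputed):.2f}")
```

Cluster many passages by such similarity scores, and groups of passages that ‘sound alike’ emerge – candidate authors, in effect.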
The artificial exploration of the past is going to add many fascinating details to stories which we’ve long thought were settled and concluded. But it also raises an important question: when our children and children’s children look back at our present and try to derive meaning from it – what will they find out? How complete will their stories of their past and our present be?
I suspect those stories – the actual knowledge and understanding of the order of events – will be even more complete than what we who dwell in the present know.
Past-Future
In the not-so-far-away future, machines will be used to analyze all of the world’s data from the early 21st century. This is a massive amount of data: 2.5 quintillion bytes are created daily – enough, altogether, to fill ten million Blu-ray discs. It is astounding to realize that 90 percent of the world’s data today has been created in the last two years alone. Human researchers could not make much sense of it, but advanced AI algorithms – a super-intelligence, in some ways – could actually have the tools to crosslink many different pieces of information to obtain the story of the present: to find out what movies families watched on a specific day, in which hotel the President of the United States stayed during a recent visit to France and what snacks he ordered on room service, and countless other minutiae.
Are those details useless? They may seem so to our limited human comprehension, but they will form the basis for the AI engines to better understand the past, and to produce better stories of it. When the people of the future try to understand how World War 3 broke out, their AI historians may actually conclude that it all began with a presidential case of indigestion at a certain French hotel, which annoyed the American president so much that it prevented him from making the most rational choices over the next couple of days. A hypothetical scenario, obviously.
Futuronymity – Maintaining Our Privacy from the Future
We are gaining improved tools to explore the past with, and to derive insights and new knowledge even where information is missing. These tools will be improved further in the future, and will be used to analyze our current times – the early 21st century – as well.
What does it mean for you and me?
Most importantly, we should realize that almost every action you take in the virtual world will be scrutinized by your children’s children, probably after your death. Your actions in the virtual world are recorded all the time, and if the documentation survives into the future, then the next generations are going to know all about your browsing habits in the middle of the night. Yes, even though you turned incognito mode on.
This means we need to develop a new concept of privacy: futuronymity (derived from Future and Anonymity), which will obscure our lives from the eyes of future generations. Politicians have always been concerned about this kind of privacy, since they know their critical decisions will be considered and analyzed by historians. In the future, common people will find themselves under similar scrutiny by their progeny. If our current hobby is going to psychologists to understand just how our parents ruined us, then the hobby of our grandchildren will be going to the computer to find out the same.
Do we even have a right to futuronymity? Should we hide from the next generations the truth about how their future was formed, and who was responsible?
That question is no longer in the hands of individuals. In the past, private people could simply have incinerated their hard drives with all the information on them. Today, most of the information is in the hands of corporations and governments. If we want them to dispose of it – if we want any say at all in which parts they’ll preserve and which they’ll delete – we should speak up now.