Eric just shook his head. Something was obviously bothering him, and not even Flatbread Company’s pizza (quite possibly the best pizza in the known universe, or at least in Rhode Island) could provide him with some peace of mind.
“It’s the bot,” he finally erupted at me. “That damned bot. It’s going to take over my job.”
“You’re a teaching assistant,” I reminded him. “It’s not a real job. You barely have enough money to eat.”
“Well, it’s some kind of job, at least,” he said bitterly. “And soon it’ll be gone too. I just heard that at the Georgia Institute of Technology they actually managed to have a bot – an artificial intelligence – perform as a teaching assistant, and no one noticed anything strange!”
“Yeah, I remember,” I remembered. “It happened just last semester. What was the bot’s name again?”
“It’s Jill,” he said. “Jill Watson. It’s based on the same Watson AI engine that IBM developed a few years ago. That Watson can already debate current issues, conduct scientific literature reviews, and even provide legal consultation. And now it can assist students just like a human teaching assistant, and they don’t even notice the difference!”
“How can that be?” I tried to understand.
“It all happened in a course about AI that Prof. Ashok Goel taught at Georgia Tech,” he explained. “Goel realized that the teaching assistants in the course were swamped with questions from students, so he decided to train an artificial intelligence to help them. The AI went over forty thousand questions, answers and comments written by students and teaching assistants in the course’s forum, and was trained to answer new questions in a similar fashion.”
“So how well did it go?” I asked.
“Wonderful. Just wonderful,” he sighed. “The AI, masquerading as Jill Watson, answered students’ questions throughout the semester, and nobody realized that there wasn’t a human being behind the username. Some students even wanted to nominate ‘her’ as an outstanding teaching assistant.”
“Well, where’s the harm in that?” I asked. “After all, she did lighten the workload for all the human teaching assistants, and the students obviously feel fine about it. So who cares?”
He sent a dirty look my way. “I care – I’m the one who needs a job, even a horrible one like this, to live,” he said. “Just think about it: in a few years, when every course is managed by a bunch of AIs, there won’t be many jobs open for human teaching assistants. Or maybe not even for teachers!”
“You need to think about this differently,” I advised him. “The positive side is that there’s still a place for human teaching assistants, as long as they know how to work with the automated ones. After all, even the best AI in the world doesn’t yet know how to answer every question; there’s still a place for human common sense. So there will definitely be a place for the human teaching assistant – he’ll just have to be the best at what he does. He’ll need to operate several automated assistants at once, letting them handle the routine questions and pass him only the most bizarre and complex ones. He’ll need to know how to work with computers and AI, but also have the social skills to resolve difficult situations for students. And he’ll need to be reliable enough to do all of the above proficiently over time. So yes, lots of people are going to compete for that one job, but I’m sure you can succeed at it!”
Eric didn’t look convinced. Quite honestly, I wasn’t either.
“Well,” I tried, “you can always switch occupations. For example, you can become a psychologist…”
“There are already companies that provide psychological services over the internet, using text messages,” he said. “Turns out it’s working really well for the patients. You want to bet bots can do this too in a few years? So get ready to wave bye-bye to many of the human psychologists out there.”
“Or maybe you could become an author and write novels…” I tried to continue.
“An AI managed to write a novel this year, and it passed the first round of a Japanese literary competition,” he stated.
“Ok, fine!” I said. “So just sell flowers or something!”
“Facebook is now opening a new bot service that lets people start an online conversation with businesses and order food, flowers and other products,” he said with frustration. “So you see? Nothing left for humans like us.”
“Well,” I thought hard. “There must be some things left for us to do. Like, you see that girl over there at the end of the bar? Cute, isn’t she? Did you notice she’s been looking at you for the last hour?”
He followed my eyes. “Yes,” he said, and I could hear the gears start turning in his head.
“Think about it,” I continued. “She’s probably interested in you, but doesn’t know how to approach you.”
He thought about it. “I bet she doesn’t know what to say to me.”
I nodded.
“She doesn’t know how best to attract my attention,” he went on.
“That’s right!” I said.
“She needs help!” he decided. “And I’m just the guy who can help her. Help everyone!”
He stood up resolutely and went for the exit.
“Where are you going?” I called after him. “She’s right here!”
He turned back to me, and I winced at the sight of his glowing eyes – the sure sign of an engineer at work.
“This problem can definitely be solved using a bot,” he said, and went outside. I could barely hear his muffled voice carrying on behind the door: “And I’m about to do just that!”
I went back to my seat, and raised my glass in what I hoped was a comforting salute to the girl on the other side of the bar. She may not realize it quite yet, but soon bots will be able to replace human beings in yet another role.
Brandon Sanderson is one of my favorite fantasy and science fiction authors. He produces new books at an incredible pace, and his writing quality does not seem to suffer for it. Steelheart, the first book in his recent sci-fi trilogy The Reckoners, was published in September 2013. Calamity, the third and last book in the series, was published in February 2016. Just three years passed between the first and the last book in the series.
The books themselves describe a post-apocalyptic future, around ten years away from us. In the first book, the hero lives in one of the most technologically advanced cities in the world, with electricity, smartphones, and sophisticated technology at his disposal. Sanderson describes sophisticated weapons used by the police forces in the city, including laser weapons and even mechanized war suits. By the third book, our hero reaches another technologically advanced outpost of humanity, and is suddenly surrounded by weaponized aerial drones.
You may say that the first city simply chose not to use aerial drones, but that explanation is a bit sketchy, as anyone who has read the books can testify. Instead, it seems to me that in the three years since the original book was published, aerial drones made a large enough impact on the general mindset that Sanderson could no longer ignore them in his vision of the future. He realized that his readers would look askance at any vision of the future that does not include aerial drones of some kind. In effect, the drones have become part of the way we think about the future. We find it difficult to imagine a future without them.
Usually, our visions of the future change relatively slowly and gradually. In the case of the drones, it seems that within three years they’ve moved from an obscure technological item to a common myth the public shares about the future.
Science fiction, then, can show us what people in the present expect the future to look like. And therein lies its downfall.
Where Science Fiction Fails
Science fiction can be used to help us explore alternative futures, and it does so admirably well. However, best-selling books must reach a wide audience and resonate with many readers on several different levels. To do that, the most popular science fiction authors cannot stray too far from our current notions. They cannot let go of our natural intuitions and core feelings: love, hate, the appreciation we have for individuality, and many others. They can explore themes in which the anti-hero, or The Enemy, defies these commonalities we share in the present. But if the author wants to write a really popular book, he or she will take care not to forgo completely the reality we know.
Of course, many science fiction books are meant for an ‘in-house’ audience: the hard-core sci-fi readers who are eager to think beyond the box of the present. Alastair Reynolds, in his Revelation Space series, for example, succeeds in writing sci-fi literature for exactly this audience. He writes stories that in many respects transcend notions of individuality, love and humanity. And he pays the price for this transgression, as his books (to the best of my knowledge) have yet to appear on the New York Times Best Seller list. Why? As one disgruntled reviewer writes about Reynolds’ book Chasm City –
“I prefer reading a story where I root for the protagonist. After about a third of the way in, I was pretty disturbed by the behavior of pretty much everyone.”
Highly popular sci-fi literature is thus forced never to let go completely of present paradigms, which sadly limits its use as a tool for developing and analyzing far-away futures. On the other hand, it’s conceivable that an annual analysis of the most popular sci-fi books could provide us with an understanding of the public state of mind regarding the future.
Of course, there are much easier ways to determine how much hype certain technologies receive in the public sphere. It’s likely that by running data mining algorithms on the content of technological blogs and websites, we would reach better conclusions. Such algorithms can also be run practically every hour of every day. So yeah, that’s probably a more efficient route to figuring out how the public views the future of technology.
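For what it’s worth, the simplest version of such mining is almost trivial to sketch. The snippet below is a minimal illustration, assuming a hypothetical folder of scraped blog posts saved as text files with date-prefixed names; counting monthly mentions of a technology is the crudest possible hype metric, but it conveys the idea:

```python
import re
from collections import Counter
from pathlib import Path

# Hypothetical setup: one text file per scraped blog post, named like
# "2016-03-14_some-post.txt" (the date prefix is an assumption here).
POSTS_DIR = Path("scraped_posts")
TERM = re.compile(r"\bdrones?\b", re.IGNORECASE)

mentions_per_month = Counter()
for post in POSTS_DIR.glob("*.txt"):
    month = post.name[:7]  # "YYYY-MM" taken from the filename
    mentions_per_month[month] += len(TERM.findall(post.read_text(encoding="utf-8")))

# A count that climbs month after month is a crude proxy for growing hype.
for month, count in sorted(mentions_per_month.items()):
    print(month, count)
```

A real system would, of course, weight sources, track many terms at once, and normalize for the overall volume of posts; but even this toy version could have flagged the drones’ takeover of our collective imagination.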
But if you’re looking for an excuse to read science fiction novels for a purely academic reason, just remember you found it in this blog post.
I’ve finally had the chance to watch Star Wars – The Force Awakens, and I’m not going to sugarcoat it: it was incredibly mediocre. The director mainly played up the nostalgia value to replace the need for humor, real drama or character development. I’m not saying you shouldn’t watch it – just don’t set your expectations too high.
The really interesting thing in the movie for me, though, was the ongoing Failure of the Paradigm woven throughout it. As has often been noted, Star Wars is in fact a medieval tale of knights in shining armor, a princess in distress (an actual princess! in space!), an evil dark wizard and some unresolved father-son issues. So yeah, we have a civilization that is technologically advanced enough to travel between planets at warp speed without much need for fuel, but we see no similar developments in any other field: no nano-robots, no human augmentation, no biological warfare, no computer-brain interfaces, and absolutely no artificial intelligence. And please don’t insult my intelligence by claiming that R2D2 has one.
Star Wars: a medieval space tale of knights and damsels in distress. Image originally from GeekTyrant
The question we should be asking is why. Why would any script writer ignore so many of these potential technological developments – some of which are bound to pop up in the next few decades – and focus instead on plots that have been told and retold for thousands of years?
The answer is the Failure of Paradigm: we are stuck in the current paradigm of humanity, love, heroes and free will expressed by biological entities. It takes a superb director and script writer – the Wachowskis’ The Matrix comes to mind – to create an excellent movie that makes you rethink those paradigms. But if you stick with the current paradigms, all you need is an average script, an average director and a lot of explosions to create a blockbuster.
Star Wars is a great example of how NOT to make a science fiction movie. It does not explore the boundaries of what’s possible and impossible in any significant way. It does not make us consider the impact of new technologies, or the changing structure of humanity. It sticks to the old lines and old terms: evil vs. good, empire vs. rebels, father vs. son, and a dashing hero with a bumbling damsel in distress (even though the damsel in the new movie is male). It is not science fiction. Instead, it is a fantasy movie.
And that’s great for some people. Heck, maybe even most people. That’s why it’s the ruling paradigm at the moment – it makes people feel happy and content. But I can’t help thinking about, and regretting, the opportunity lost here. A movie with such a huge audience could make people think. The director could have involved a sophisticated AI in the plot, to make people consider the future of working with artificial virtual assistants. Instead we got a clownish robot. And destroying planets with cannons, requiring immense energy output? What evil empire in its right mind would use such an inefficient method? Why not, instead, just reprogram a single bacterium to create ‘grey goo’ – a self-replicating nano-robot that devours all humans in its path in order to make more replicas of itself?
The answer is obvious: developments like these would make this fictional world too different from anything we’re willing to accept. In a world of sophisticated risk-calculating AI, there’s not much place for heroics. In a world of nano-technology, there’s no place for wasteful explosions. And in a world with brain-machine interfaces, it is entirely possible that there’s no place for love, biological or otherwise. All of these paradigms that are inherent to us would be gone, and that’s a risk most directors and script writers just aren’t willing to take.
So go – watch the new Star Wars movie, for old times’ sake. But after you do, don’t pass up some other science fiction movies from the last couple of years that force us to rethink our paradigms. I recommend Chappie and Ex Machina from last year in particular. These movies may not have the same number of eager followers, and in some cases they are quite disturbing (Chappie only received a 31% rating on Rotten Tomatoes) – but they will make you think between the explosions. And in the end, isn’t that what we should expect from our science fiction?
When most of us think of the Marine Corps, we usually imagine sturdy soldiers charging headlong into battle, or carefully sniping at an enemy combatant from a rooftop. We probably don’t imagine them reading – or writing – science fiction. And yet, that’s exactly what 15 Marines are about to do two weeks from now.
The Marine Corps Warfighting Lab (I bet you didn’t know they have one) and The Atlantic Council are holding a Science Fiction Futures Workshop in early February. And guess what? They’re looking for “young, creative minds”. You probably have to be a Marine, but even if you aren’t – maybe you’ll have a chance if you submit an application as well.
A week ago I lectured to an exceedingly intelligent group of young people in Israel – “The President’s Scientists and Inventors of the Future”, as they’re called. I decided to talk about the future of robotics and its uses in society, and as an introduction to the lecture I tried to dispel a few myths about robots that I’ve heard repeatedly from older audiences. Perhaps not so surprisingly, the kids were just as disenchanted with these myths as I was. All the same, I’m writing the four robot myths here, for all the ‘old’ people (20+ years old) who are not as well acquainted with technology as our kids.
As a side note: I lectured in front of the Israeli teenagers about the future of robotics, even though I’m currently residing in the United States. That’s another thing robots are good for!
I’m lecturing as a tele-presence robot to a group of bright youths in Israel, at the Technion.
First Myth: Robots must be shaped like Humanoids
Ever since Karel Capek’s first play about robots, the general notion among the public has been that robots must resemble humans in their appearance: two legs, two hands and a head with a brain. Fortunately, most sci-fi authors stop at that point and do not add genitalia as well. The idea that robots have to look just like us is, quite frankly, ridiculous, and stems from an inflated appreciation of our own form.
Today, this myth is being dispelled rapidly. Autonomous vehicles – basically robots designed to travel on roads – obviously look nothing like human beings. Even telepresence robot manufacturers have despaired of robotic arms and legs, and are producing robots that often look more like a broomstick on wheels. Robotic legs are simply too difficult to operate, too costly in energy, and much too fragile with the materials we have today.
Telepresence robots – no longer shaped like human beings. No arms, no legs, definitely no genitalia. Source: Neurala.
Second Myth: Robots have a Computer for a Brain
This myth is interesting in that it’s both true and false. Obviously, robots today are operated by artificial intelligence run on a computer. However, the artificial intelligence itself is vastly different from the simple and rules-dependent ones we’ve had in the past. The state-of-the-art AI engines are based on artificial neural networks: basically a very simple simulation of a small part of a biological brain.
The big breakthrough with artificial neural networks came about when Andrew Ng and other researchers in the field showed they could use cheap graphical processing units (GPUs) to run sophisticated simulations of artificial neural networks. Suddenly, artificial neural networks appeared everywhere, at a fraction of their previous price. Today, all the major IT companies are using them, including Google, Facebook, Baidu and others.
Although artificial neural networks have so far been largely confined to the IT sector, they are beginning to direct robot activity as well. By employing artificial neural networks, robots can start making sense of their surroundings, and can even be trained for new tasks by watching human beings perform them, instead of being programmed manually. In effect, robots employing this new technology can be thought of as having (exceedingly) rudimentary brains, and within the next decade they may reach an intelligence level similar to that of a dog or a chimpanzee. We will be able to train them for new tasks simply by instructing them verbally, or even by showing them what we mean.
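To make the idea concrete, here is a toy sketch of ‘learning from demonstrations’ with a tiny artificial neural network, written from scratch in Python. The ‘demonstrations’ are just four input-action pairs forming an XOR pattern, a stand-in for any nonlinear skill; real robotic systems use vastly larger networks and datasets, but the principle is the same: adjust the connection weights until the network reproduces the demonstrated behavior.

```python
import numpy as np

# Toy "demonstrations": four sensor inputs paired with the action a human
# took in each case. The XOR pattern stands in for any nonlinear skill.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 2.0
for _ in range(10000):
    # Forward pass: compute the network's current "action" for each input.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every weight to reduce the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_h / len(X)
    b1 -= lr * d_h.mean(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]], the demonstrated actions
```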
This video clip shows how an artificial neural network AI can ‘solve’ new situations and learn from games, until it gets to a point where it’s better than any human player.
Admittedly, the companies using artificial neural networks today operate large clusters of GPUs that take up plenty of space and require plenty of energy. Such clusters cannot easily be placed in a robot’s ‘head’, or wherever its brain is supposed to be. However, this problem is easily solved once the third myth is dispelled.
Third Myth: Robots as Individual Units
This is yet another myth that we see very often in sci-fi. The Terminator, Asimov’s robots, R2D2 – those are all autonomous, individual units, operating by themselves without any connection to The Cloud. Which is hardly surprising, considering there was no information Cloud – or even widely available internet – back when those tales and scripts were written.
Robots in the near future will function much more like a team of ants than as individual units. Any piece of information that one robot acquires and deems important will be uploaded to the main servers, analyzed, and shared with the other robots as needed. Robots will, in effect, learn from each other, in a process that will increase their intelligence, experience and knowledge exponentially over time. Indeed, shared learning will accelerate the rate of AI development, since the more robots we have in society, the smarter they will become. And the smarter they become, the more we will want to assimilate them into our daily lives.
The Tesla cars are a good example of this sort of mutual learning and knowledge sharing. In the words of Elon Musk, Tesla’s CEO –
“The whole Tesla fleet operates as a network. When one car learns something, they all learn it.”
Elon Musk and the Tesla Model X: the cars that learn from each other. Source: AP and Business Insider.
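Here’s a deliberately simplified sketch of what such shared learning might look like in code. None of this reflects Tesla’s actual architecture – FleetHub, Robot and the naive ‘last report wins’ merge are all inventions for illustration – but it shows the core idea: one robot reports, and the whole fleet benefits.

```python
from dataclasses import dataclass, field

# Illustrative sketch of fleet learning. FleetHub and Robot are invented
# names, not any real API; a real system would share trained models,
# not a lookup table, but the flow of knowledge is the point here.

@dataclass
class FleetHub:
    knowledge: dict = field(default_factory=dict)  # situation -> best known response

    def report(self, situation: str, response: str) -> None:
        self.knowledge[situation] = response       # naive merge: last report wins

    def lookup(self, situation: str) -> str | None:
        return self.knowledge.get(situation)

@dataclass
class Robot:
    name: str
    hub: FleetHub

    def learn(self, situation: str, response: str) -> None:
        self.hub.report(situation, response)       # one robot learns something...

    def act(self, situation: str) -> str:
        return self.hub.lookup(situation) or "explore cautiously"

hub = FleetHub()
car_a, car_b = Robot("car_a", hub), Robot("car_b", hub)
car_a.learn("pothole at exit 9", "slow to 30 km/h")
print(car_b.act("pothole at exit 9"))  # ...and the whole fleet already knows it
```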
Fourth Myth: Robots can’t make Moral Decisions
In my experience, many people still adhere to this myth, believing that robots do not have consciousness and thus cannot make moral decisions. This is a false inference: I can easily program an autonomous vehicle to stop before hitting human beings on the road, without the vehicle enjoying any kind of consciousness. Moral behavior, in this case, is the product of programming.
Things get complicated when we realize that autonomous vehicles, in particular, will have to make novel moral decisions that no human being has ever been required to make in the past. What should an autonomous vehicle do, for example, when its brakes fail and it finds itself rushing toward a man crossing the road? Obviously, it should veer to the side of the road and hit the wall. But what should it do if it calculates that its ‘driver’ will be killed in the collision with the wall? Who is more important in this case? And what happens if two people are crossing the road instead of one? What if one of them is a pregnant woman?
These questions demonstrate that it is hardly enough to program an autonomous vehicle for specific encounters. Rather, we need to program into it (or train it to obey) a set of moral rules – heuristics – according to which the robot will interpret any new occurrence and reach a decision.
And so, robots must make moral decisions.
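To illustrate what ‘a set of moral heuristics’ could mean in practice, here is a deliberately crude sketch. Every role, probability and weight in it is invented for illustration; the whole unsolved problem, of course, is hidden in choosing the weights.

```python
# A deliberately crude sketch of moral heuristics for the brake-failure
# scenario above. Every role, probability and weight here is invented;
# deciding what the weights should be is precisely the unsolved part.

HARM_WEIGHTS = {"pedestrian": 1.0, "passenger": 1.0}

def expected_harm(option):
    """Sum of (probability of death) x (assigned weight) over everyone affected."""
    return sum(prob * HARM_WEIGHTS[role] for role, prob in option["risks"])

def choose_maneuver(options):
    # The heuristic: pick whichever maneuver scores as least harmful.
    return min(options, key=expected_harm)

options = [
    {"name": "continue straight", "risks": [("pedestrian", 0.9)]},
    {"name": "swerve into the wall", "risks": [("passenger", 0.5)]},
]
print(choose_maneuver(options)["name"])  # -> "swerve into the wall"
```

Should a pregnant woman count twice? Should the passenger count more than the pedestrian, since the car owes its ‘driver’ a duty of care? The code makes the dilemma explicit: once the weights are written down, the robot is making moral decisions, whether we like it or not.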
Conclusion
As I wrote at the beginning of this post, the youth and the ‘techies’ are already aware of how out of date these myths are. Nobody yet knows, though, where the new capabilities of robots will take us once they are combined. What will our society look like when robots are everywhere, sharing their intelligence, learning from everything they see and hear, and making moral decisions not from an individual unit’s perception (as we human beings do), but from an overarching perception spanning insights and data from millions of units at the same time?
This is where we are heading: a super-intelligence composed of incredibly sophisticated AI, with robots as its eyes, ears and fingertips. It’s a frightening future, to be sure. How could we possibly control such a super-intelligence?
That’s a topic for a future post. In the meantime, let me know if there are any other myths about robots you think it’s time to ditch!
Two weeks ago it was “Back to the Future Day”. More specifically, Doc and Marty McFly reached the future on exactly October 21st, 2015 in the second movie in the series. Being a futurist, I was invited to several television and radio talk shows to discuss the shape of things to come, which is pretty ridiculous, considering that the future is always about to come: we should talk about it every day, not just on a day arbitrarily chosen by the scriptwriters of a popular movie.
All the same, I’ll admit I had an uplifting feeling. On October 21st, everybody was talking about the future. That made me realize something about science fiction: we really need it. Not just for the technological ideas that it gives us (like cellular phones and Tricorders from Star Trek), but also for the expanded view of the future that it provides us with.
Sci-fi movies and books take root in our culture, and establish a longing for, and an expectation of, a well-defined future. In that way, sci-fi creations provide us with a valuable social tool: a radically prolonged cycle-time, which is the length of time an individual in society tends to look forward to and plan for in advance.
Cycle-times in the Past
As human beings, and as living organisms in general, we have been shaped by evolution to fulfill one main goal: transferring our genes to our descendants. We are, to paraphrase Richard Dawkins, trucks that carry the load of our genes into the future, as far as possible from our current starting point. It is curious to realize that in order to preserve our genes into the future, we must be almost totally aware of the present. A prehistoric person who was not always on the alert for encroaching wolves, lions and tigers would not have survived very long. Millions of years of evolution have designed living organisms to focus almost entirely on the present.
And so, for the first few tens of thousands of years of human existence, we ran away from the tigers and chased after the deer, with a very short cycle-time, probably lasting less than a day.
It is difficult, if not impossible, to know exactly when we managed to strike a bargain with Grandfather Time. Such a bargain gave early humans great power, and all they needed to do in return was measure and document the passing of hours and days. I believe that we started measuring time quite early in human history, since time measurement brought power, and power ensured survivability – and the passing of genes, and of time-measurement methodologies, to the next generation.
The first cycle-times were probably quite short. Early humans could roughly calculate how long it would take the sun to set according to its position in the sky, and so they knew when to start or end a hunt before darkness fell. Their cycle-time was a single day. The woman who wanted to anticipate her upcoming menstrual period – which could draw predators and make it more difficult for her to hunt – could do so by looking at the moon and making a mark on a stick every night. Her cycle-time was a full month.
The great leap forward occurred in agricultural civilizations, which were based on an understanding of the cyclical nature of time: a farmer must know the order of the seasons and realize their significance for his field and crops. Without looking a full year ahead into the future, agricultural civilizations could not have reached their full height. And so, ten thousand years ago, the first agricultural civilizations set a cycle-time of a whole year.
And that is pretty much the way it has remained ever since.
One of the most ancient cycle-times, and the most common one as well: the seasons of the year.
Religious Cycle-times
Religions initially had the potential to provide longer cycle-times. The clergy often documented history and attempted to forecast the future – usually by creating or establishing complex mythologies. Judaism, for example, prolonged the agricultural cycle-time by setting a seven-year cycle of tending one’s field: six years of growing crops, and a seventh year (Shmita, in Hebrew) in which the fields are allowed to rest.
“For six years you are to sow your fields and harvest the crops, but during the seventh year let the land lie unplowed and unused.” – Exodus 23:10-11.
Most of the religious promises for the future, however, were vague, useless or even harmful. In his book The Clock of the Long Now, Stewart Brand repeats an old joke that caricatures, with more than a shred of truth, the difficulties of the Abrahamic religions (i.e. Judaism, Christianity and Islam) in dealing with the future and creating useful cycle-times in the minds of their followers. “Judaism,” writes Brand, “says [that] the Messiah is going to come, and that’s the end of history. Christianity says [that] the Messiah is going to come back, and that’s the end of history. Islam says [that] the Messiah came, and history is irrelevant.” [the quote has been slightly modified for brevity]
While this is obviously a joke, it reflects a deeper truth: that religions (and cultures) tend to focus on a single momentous future, and ignore anything else that comes along. Worse, the vision of the future they give us is largely unhelpful since its veracity cannot be verified, and nobody is willing to set an actual date for the coming of the Messiah. Thus, followers of the Abrahamic religions continue their journey into the future, with their eyes covered with opaque glasses that have only one tiny hole to let the light in – and that hole is in the shape of the Messiah.
A futile religious cycle-time: everybody is waiting for the Messiah, who will arrive sometime, somewhere, somehow.
Why We Need Longer Cycle-times
When civilizations fail to consider the future in long cycle-times, they head towards inevitable failure and catastrophe. Jared Diamond illustrates this point time and time again in his masterpiece Collapse, in which he reviews several extinct civilizations, and the various ways in which they failed to adapt to their environment or plan ahead.
Diamond describes how the Easter Islanders did not think in the cycle-times of trees and soil, but in shorter, human cycle-times. They greedily cut down too many of the island’s trees, and over several decades they squandered its natural resources. Similarly, the settlers in Greenland could not think in a cycle-time long enough to encompass the grasslands and the changing climate, and were forced to evacuate the island or freeze to death after their goats and cattle damaged Greenland’s delicate ecology.
Agricultural civilizations, as I wrote earlier, tend by nature to think in cycle-times no longer than several years, and find it difficult to adjust their thinking to longer ones: those that apply to trees, soil, and the evolution of animals (and humans). As a result, agricultural civilizations damage all of the above, disrupt their environment, and eventually disintegrate and collapse when their surroundings can no longer support them.
If we wish to keep humanity in existence over time, we must switch to thinking in longer cycle-times that span decades and centuries. This is not to say that we should plan too far ahead – it’s always dangerous to forecast the long term – but we should constantly attempt to consider the consequences of our actions in the far-away future. We should always think of our children and grandchildren as we take steps that could determine their fate several decades from now.
But how can we implement such long-term cycle-times into human culture?
If you still remember where I began this article, you probably realize the answer by now. In order to create cycle-times that last decades and centuries, we need to visit the future again and again in our imagination. We need to compare our achievements in the present to our expectations and visions of the future. This is, in effect, the end-result of science fiction movies and books: the best and most popular of them create new cycle-times that become entwined in human culture, and make us examine ourselves in the present, in the light of the future.
Movie Time
Science fiction movies and stories have an impressive capability to influence social consciousness. Karel Capek’s 1920 theater play R.U.R., for example, not only added the word “Robot” to the English lexicon, but also infected Western society with the fear that robots will take over mankind – just as they did in Capek’s play. Another influential movie, The Terminator, was released in 1984 and solidified and consolidated that fear.
Science fiction does not have to make us fear the future, though. In Japanese culture, the cartoon robot Astro Boy became a national symbol in 1952, and ever since, the Japanese have been much more open and accepting towards robots.
Astro Boy: the science fiction series that made the Japanese view robots much more warmly than the West does.
The most influential science fiction creations are those that include dates, which in effect are forecasts for certain futures. These forecasts provide us with cycle-times that we can use to anchor our thinking whenever we contemplate the future. When the year 1984 came, journalists all over the world tried to analyze society and see whether George Orwell’s dark and dystopian dream had actually come true. When October 21st, 2015 arrived barely two weeks ago, I was interviewed almost all day long about the technological and societal forecasts made in Back to the Future. And when the year 2029 finally comes – the year in which Skynet is supposed to be controlling humanity, according to The Terminator – I confidently forecast that numerous robotics experts will find themselves invited to talk shows and other media events.
As a result of the above science fiction creations, and many others, humanity is beginning to enjoy new and ambitious cycle-times: we look forward in our mind’s eye towards well-designated future dates, and examine whether our apocalyptic or utopian visions for them have actually come true. And what a journey into the future that is! The most humble cycle-times in science fiction span several decades ahead. The more grandiose ones leap forward to the year 2364 (Star Trek), 2800 (Dan Simmons’ Hyperion Cantos) or even to the end of the universe and back again (in Isaac Asimov’s short story The Last Question).
The longest cycle-times of science fiction – those dealing with thousands or even millions of years ahead – may not be particularly relevant for us. The shorter cycle-times of decades and centuries, however, receive immediate attention from society, and thus have an influence on the way we conduct ourselves in the present.
Conclusion
Humanity has great need of new cycle-times, far longer than any established in its history. While policy makers attempt to take into account forecasts that span decades ahead, the public is generally not exposed to, or influenced by, such reports. Instead, the cycle-times of many citizens are calibrated according to popular science fiction creations.
Hopefully, those longer cycle-times will allow humanity to prepare in advance for longer-term existential challenges, such as ecological catastrophes or social collapse. At the very same time, longer cycle-times can also encourage and push forward innovation in certain areas, as entrepreneurs and innovators struggle to fulfill the prophecies made for certain technological developments (just think of all the clunky hoverboards that were invented in time for 2015 as proof).
In short, if you want to save the future, just write science fiction!