When it comes to the future of artificial intelligence, we seem to be stuck in a loop. We tell the same stories about A.I. over and over again: society is destroyed (the “Terminator” movies), the machines emulate and replace us (“Ex Machina”), the machines become gods pulling the strings (“The Matrix”). This is a dangerous way to think about A.I., because the stories we tell influence the decisions we make about how such systems should operate.
None of the three scenarios I just described represents a future in which we would want to live. They are all trajectories in which superintelligent machines simply leave us behind. If we spend all of our time looking over our shoulders for killer robots, that means we are not looking ahead to discern the outcomes we might actually want. A study of A.I. representations in film and television by Christopher Noessel underscores the problem: We have lots of stories about the power and the duplicitous nature of A.I., but almost none exploring what he calls the “Untold A.I.” themes: accountability, effective policy and broad literacy around these technologies.
To thrive in the era of intelligent machines, we need to expand our thinking. Instead of worrying about godlike super-machines, we should tell better stories about all the everyday ways A.I. is already changing the world.
When you ask Alexa to tell you a joke or turn on a light switch across the room, fulfilling that request requires “a vast planetary network” of algorithms and machines. Commercial airplanes are steered by automated piloting systems for nearly the entire flight, and companies like Waymo are deploying autonomous cars, buses and trucks. Machine learning provides live translation on Skype calls, and similar techniques are filtering your email spam and curating your social media feeds.
The real A.I. future is already here, but the stories we watch in darkened theaters are nightmares about a distant future that might never arrive.
Today’s everyday A.I. is often invisible, hidden behind cloud-based services that “just work” for consumers. Often the boundary between automation and intelligence is blurry: Does the neural network that distinguishes ZIP codes for the Postal Service or the thousand-page algorithm that U.P.S. uses to route its delivery trucks count as artificial intelligence? In practical terms, every time we cede agency and decision-making to a machine, we are endorsing the intelligence, or the competence, of an algorithm to do some thinking for us. It turns out that machines don’t have to be superhuman or superintelligent to wield power over us.
Of course, the risk that machines will be more intelligent, or more competent, than humans is real, and already visible in arenas like chess and Go. But instead of treating that as a zero-sum competition, we could adopt a frame of collaboration.
When Garry Kasparov was defeated by IBM’s Deep Blue in 1997, his response was to start exploring how humans and machines might play chess as teammates rather than opponents. He helped start the “Advanced Chess” movement, where humans and machines form collaborative teams, sometimes called “centaurs.” The best of these centaur teams outperform top chess-playing machines as well as human grandmasters.
The most successful A.I. systems out there today are dependent on teams of humans, just as the humans depend on those systems to provide insights and perform tasks beyond their own abilities. Image-processing A.I. can outperform human radiologists at spotting tumors in X-rays, if medical personnel get patients in front of the right machine and ask the right questions. But teams of human doctors will be vital to marrying technology and empathy for the effective treatment of complex diseases.
As A.I. becomes more capable, we will start to see signs of real collaboration, which requires a level of trust akin to that of human partners in a complex environment. Our lives are full of cultural work that requires this kind of give and take, which depends in turn on accessibility, literacy and accountability. Most of the truly beautiful things humans create emerge from that kind of multi-agent process. As we start to work with A.I., we will need to find ways to collaborate with it.
So how do we go about expanding our thinking?
One way is to recognize the diversity of work already going on that gets drowned out by the loudest and most powerful voices talking about A.I. It is also important to consider who is in the room when A.I. systems are designed, and how easy it is for those human architects to embed their biases in the systems they build. Additional voices need to be at the table, people from groups underrepresented in Silicon Valley-style innovation, especially women, people of color, and the economically disadvantaged. Bringing more diversity into the design process will improve outcomes for technologists who seek to design products and solve problems for “everyone.”
One great example of bringing diversity into A.I. is the Global A.I. Narratives project from Cambridge University’s Leverhulme Center for the Future of Intelligence, which seeks to understand what stories we are telling about the future of A.I. outside the Anglophone West.
Another kind of diversity lies in our rich cultural archives. People have been dreaming and writing about A.I. for centuries. Inspired by the work of Mr. Noessel, the Center for the Future of Intelligence and others, I am helping to lead A.I. Policy Futures, a project at Arizona State University and the New America Foundation. Our first goal is to create a taxonomy of A.I. in science fiction literature and film. We hope this will give us a broader view of the possibilities of A.I. by resurrecting good ideas that we have collectively forgotten, while also highlighting the gaps in our collective thinking.
The best way forward is to commit to the goal of thinking more holistically about A.I. and then orchestrate activities and conversations to advance it. When you bring together science fiction writers, technologists and policymakers, you create a feedback loop in which interesting things tend to happen. People ask one another simple yet surprisingly profound questions. Leaders in these different fields occasionally realize they have been working with impoverished conceptions of what particular words or ideas really mean. New plans and stories get hatched. From the Science Fiction Advisory Council at the X-Prize Foundation to the value that entities like NASA have attached to science fiction, there is growing recognition of the power and potential of this feedback loop.
A.I. is too interesting, too ubiquitous, and too poorly defined to be left to Hollywood mega-franchises or the same old cultural shorthand we’ve been using for 60 years. The thinking machines we should be talking about are not super-intelligences or replicants but rather the just-smart-enough technologies already changing our lives. How will we deal with A.I. that is not alien or omniscient but just a few steps ahead of us, like a good tennis partner?
We have to tell new stories about “little” A.I.: machines that aren’t trying to take over the world but just get the job done. Most important, we need more stories about real people in these futures, and how we will adapt to a reality where we share intelligence, work and creativity with our machines.
Ed Finn, the author of “What Algorithms Want: Imagination in the Age of Computing” and co-editor of “Frankenstein: Annotated for Scientists, Engineers, and Creators of All Kinds,” is the founding director of the Center for Science and the Imagination at Arizona State University, where he is an associate professor.