The history of technology is full of predictions that now sound laughable. One of the most famous is attributed to Bill Gates, who in 1981 supposedly said, “640 kilobytes ought to be enough for anybody.” Predictions about artificial intelligence are no different; early AI researchers assumed that we’d have a robot that walked, talked, and thought like a human within decades. Of course, despite some awe-inspiring achievements in machine learning, AI still isn’t there yet. That’s because, according to a principle known as Moravec’s paradox, we can teach machines to solve the hard problems, but it’s the easy ones that are difficult.
In 1957, economist and computer science pioneer Herbert Simon famously said, “It is not my aim to surprise or shock you — but the simplest way I can summarize is to say that there are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until — in a visible future — the range of problems they can handle will be coextensive with the range to which the human mind has been applied.”
Simon died in 2001, and his “visible future” of a world where machines can think like humans is still a long way off. Sure, artificial intelligence has proven itself when tasked with specific problems, like categorizing distant galaxies or imitating celebrity voices or creating works of art, but simply thinking — a capability known as artificial general intelligence — seems to stymie the most advanced machine learning systems. Heck, even walking on two feet is a challenge for machines. They might be able to beat a grandmaster at chess, but they can’t beat a toddler at picking the right toy off the shelf.
This isn’t a new problem. In the 1980s, computer scientist Hans Moravec laid out this exact challenge — what has now been dubbed “Moravec’s paradox” — and explained why it’s just what we should expect from machines that are immune to the pressures of natural selection. “Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it,” he wrote in his 1988 book “Mind Children.” “The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge.”
That is to say, the things humans find easiest are the very things that took eons of evolution to refine. The things humans find hardest are only hard because they’re new — we’ve been thinking about chess strategy for a little over a thousand years, but we’ve been learning how to interact with our surroundings since our ancestors were single-celled organisms. The skills that are hardwired through evolution don’t take conscious thought, and when you don’t have to think about something, it’s harder to figure out how to teach a machine to do it.
So how do you teach a machine to truly think? Moravec thinks the answer lies in the very thing the machines lack: evolution. “I feel that the fastest progress can be made by imitating the evolution of animal minds, by striving to add capabilities to machines a few at a time, so that the resulting sequence of machine behaviors resembles the capabilities of animals with increasingly complex nervous systems,” he wrote. Even at that time, however, things were improving. “Programs which tackle incremental problems similar to those that faced early animals — how to deal with, and even to anticipate, the sudden surprises, dangers, and opportunities encountered by an exploring organism — are being written and tested in robots that have to face the uncertainties of the real world.”
That progress continues today. Engineers are teaching artificial intelligence algorithms to be exploratory by having them play video games, for example. But it’s a two-way street: Before we can teach machines to think like humans, we need to more fully understand how humans think, and understanding the limitations of machine learning can answer questions about how our minds truly work. Engineers designing neural networks can learn from neuroscientists and vice versa.
It’s possible that, as Steven Pinker predicts, the jobs in most danger of being replaced by artificial intelligence are “the stock analysts and petrochemical engineers and parole board members,” while blue-collar workers like cooks and gardeners are the ones with job security. It’s also possible that the paradox means AI will never be truly independent and will always rely on the help of human users. But in either case, we should all take a moment to appreciate the supercomputers working inside our skulls. They make the most difficult tasks in the world look easy.