What is intelligence?
What does it mean to understand the world around us?
What is it about humans that makes us so special?
For so many people, these are abstract, philosophical questions, but as progress continues to rocket forward in the field of AI, they have come into a crisp, unsettling focus.
For as long as the field of AI has existed, the AI effect has moved in tandem with it. After every advance in AI, a vocal contingent snakes up from the pseudo-academic depths to shout: “That’s not real intelligence!”
It’s hard to say that they are right, because “real intelligence” has never been a well-defined term, but it’s also hard to say that they are wrong for exactly the same reason. In the past, this problem was largely immaterial because a couple of interns in a lab could easily create measurable experiments that showed a massive gap between human and machine capabilities.
The problem is that it is increasingly difficult to create these experiments.
So difficult, in fact, that an entire swath of the AI community has risen up against the very concept of measurable progress. They treat empiricism as an ugly word, and hold above all else that, despite mountains of evidence, deep learning is not an effective tool for AI.
It feels like every year of progress drives us deeper into the throes of Moravec’s paradox. It turns out that all of those things that make humans feel intelligent, like chess and video games and mathematical proofs, are simple. Instead, it’s those intuitive leaps of faith - the way your foot juts out when you stumble - that are complex.
If you believe that the goal of AI is to replicate human intelligence, it’s easy to feel like the human spirit is stuck on a sinking island, every new benchmark sloughing off another stretch of sand.
It’s a terrifying enough experience that it’s no wonder these traditionalists scream and shout that the island will never sink, that removing sand can never drag it into the ocean. You can shut your eyes and imagine that the sand still stretches out as far as it did a decade ago, but as long as you believe the goal of AI is to replicate humanity, the relationship will be adversarial. Thus, it is the people who hold this belief that tend to speak out most vocally against deep learning.
That’s not to say that they are simply arguing that we continue to explore other methods. That’s a point that no serious researcher disagrees with. They argue instead that the whole of deep learning’s achievements ought to be discarded. That by deigning to measure the result, we have insulted the honor of human intelligence. The vague argument then presented is that we should throw out all of deep learning, all the empirical evidence, and the datasets along with it. Not because these techniques are not useful, or even because they are specific to deep learning. It’s more that the old guard are demanding a penance paid in blood, and we ought to sacrifice decades of progress in the name of their egos.
It is, indeed, a difficult point to take seriously.
Macroscopically, the problem is that “understanding” and “intelligence” have no objective meaning. There is no universally agreed-upon definition, let alone an effective test. If we have learned nothing else from the pandemic, we have all learned that it is eminently possible for one person to see intelligence and understanding where another sees ignorance and confusion.
Some then jump to the notion that this is evidence of humanity’s faulty wiring. That there is some platonic ideal of intelligence, and most humans simply haven’t managed to achieve it. The implicit notion is always that the data scientist really believes that they themselves should be the sole arbiter of truth in the world. Any technique that does not allow them to do this is dangerous, and so they pretend to have some genuine academic disagreement, only one that they can never quite put into words.
At the core of this worldview is the notion that intelligence moves in a strict linear Great Chain of Being from ants to salamanders to other people to the holy data scientist. That if and when they build an AI that is truly intelligent, it will simply resolve all of that messy human mush with the cold salvation of logic.
It is this belief that I argue against most fervently. It is here that I take up the decades-old mantle of Hubert Dreyfus, echo his critique of the four assumptions of AI research, and say that human intelligence will never be reduced to rules and taxonomies. It is not a question of simply writing more and better rules; the primacy of fuzziness in deep learning is fundamental to its success.
At its core, intelligence is a messy thing. No mosaic of logic and wires is more tangled than the human mind. Deep learning adds a set of tools to the AI architect’s tool belt that let us impose structure and constraints on that messiness. It is a method of programming with data, capable of replicating specific cognitive behaviors to a near-arbitrary degree.
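To make “programming with data” concrete, here’s a minimal sketch. The framework (PyTorch) and the toy task (XOR) are my choices for illustration, not anything the argument depends on: the point is that the behavior is specified by examples, and optimization finds the program.

```python
# A toy illustration of "programming with data": the specification is a
# dataset of examples, not a rulebook. (XOR is a hypothetical toy task,
# chosen purely for illustration.)
import torch
import torch.nn as nn

inputs = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
targets = torch.tensor([[0.], [1.], [1.], [0.]])

# The architecture imposes structure and constraints on the messiness:
# a small space of functions, shaped by the choice of layers and sizes.
model = nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

for _ in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()  # the data, not the programmer, pushes on the weights
    optimizer.step()

print(model(inputs).detach().round())  # approximately [[0.], [1.], [1.], [0.]]
```

Nowhere above did anyone write a rule for XOR; the behavior was replicated from examples alone.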
We’ve now come to a fork in the road: is the goal intelligence, or human mimicry? Years ago, when AI research was in its relative infancy, the two targets were very far away and in roughly the same direction. Today, this is no longer the case. We’re capable of mimicking intelligent human behavior with such precision that researchers are forced to constantly contort their training targets to challenge these master mimics.
We’re forced now to choose between two diverging paths.
Shall we build cyborgs, or bionic arms?
Is the goal to copy the human wholesale? To encode all our hatred and imperfections into silicon? Would we dare to trudge down the fraught path toward some data scientist’s concept of a more perfect human, built in our own image?
Or, is the goal instead to improve the human condition? Can we instead build intelligent tools that let us accomplish things that we never could have done on our own? Do we have an obligation to build tools that empower their end users, rather than ones that replace them?
Personally? The last several years have humbled me profoundly. While I will continue to live my life in awe, marveling at the amazing system of human intelligence, I no longer believe that our goal should be to emulate it completely. I think there are potent lessons to be taken from human intelligence in the design of AI systems, but I also believe that a well-engineered human/AI system will always outperform either alone on tasks difficult enough to be interesting.