This incessant surveillance is antidemocratic, and it’s also a loser’s game. The price of accurate intel climbs without bound; there’s no way to know everything about natural systems, forcing guesses and assumptions; and just when a complete picture is starting to coalesce, some new player intrudes and changes the situational dynamic. Then the AI breaks. The near-perfect intelligence veers into psychosis, labeling dogs as pineapples, treating innocents as wanted fugitives, and barreling eighteen-wheelers into kindergarten buses that it sees as highway overpasses.
The dangerous fragility inherent to optimization is why the human brain did not, itself, evolve to be an optimizer. The human brain is data-light: It draws hypotheses from a few data points. And it never strives for 100 percent accuracy. It’s content to muck along at the threshold of functionality. If it can survive by being right 1 percent of the time, that’s all the accuracy it needs.
The brain’s strategy of minimal viability is a notorious source of cognitive biases that can have damaging consequences: closed-mindedness, conclusion jumping, recklessness, fatalism, panic. Which is why AI’s rigorously data-driven method can help illuminate our blind spots and debunk our prejudices. But in counterbalancing our brain’s computational shortcomings, we don’t want to stray into the greater problem of overcorrection. There can be enormous practical upside to a good-enough mentality: It wards off perfectionism’s destructive mental effects, including stress, worry, intolerance, envy, dissatisfaction, exhaustion, and self-judgment. A less-neurotic brain has helped our species thrive in life’s punch and wobble, which demands workable plans that can be flexed, via feedback, on the fly.
These antifragile neural benefits can all be translated into AI. Instead of pursuing faster machine-learners that crunch ever-vaster piles of data, we can focus on making AI more tolerant of bad information, user variance, and environmental turmoil. That AI would exchange near-perfection for consistent adequacy, upping reliability and operational range while sacrificing nothing essential. It would suck less energy, go haywire less randomly, and place fewer psychological burdens on its mortal users. It would, in short, possess more of the earthly virtue known as common sense.
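To make that trade concrete, here is a minimal sketch of a good-enough decision rule, often called satisficing. Everything in it (the routes, the scores, the 0.65 threshold) is invented for illustration rather than taken from any real system.

```python
# A minimal sketch of "consistent adequacy": a satisficer accepts the
# first option that clears a good-enough threshold, instead of scoring
# every option in search of the single best one.

def satisfice(options, score, threshold):
    """Return the first option that is good enough; fall back to the
    best seen so far if nothing clears the bar."""
    best, best_score = None, float("-inf")
    for option in options:
        s = score(option)
        if s >= threshold:
            return option          # act now; no need for the perfect answer
        if s > best_score:
            best, best_score = option, s
    return best                    # workable fallback, flexed on the fly

# Hypothetical demo data: estimated travel speeds for three routes.
routes = ["detour", "highway", "back roads"]
estimated_speed = {"detour": 0.4, "highway": 0.9, "back roads": 0.7}
print(satisfice(routes, estimated_speed.get, threshold=0.65))  # highway
```

An optimizer must score every option before it acts; the satisficer commits the moment one option clears the bar, and keeps a workable fallback ready if nothing does.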
Here are three specs for how.
Building AI to Brave Ambiguity
Five hundred years ago, Niccolò Machiavelli, the guru of practicality, pointed out that worldly success requires a counterintuitive kind of courage: the heart to venture beyond what we know with certainty. Life, after all, is too fickle to permit total knowledge, and the more that we obsess over ideal answers, the more that we hamper ourselves with lost initiative. So, the smarter strategy is to concentrate on intel that can be rapidly acquired—and to advance boldly in the absence of the rest. Much of that absent knowledge will prove unnecessary, anyway; life will bend in a different direction than we anticipate, resolving our ignorance by rendering it irrelevant.
We can teach AI to operate this same way by flipping our current approach to ambiguity. Right now, when a Natural Language Processor encounters a word—suit—that could denote multiple things—an article of clothing or a legal action—it devotes itself to analyzing ever greater chunks of correlated information in an effort to pinpoint the word’s exact meaning.
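In code terms, that ever-widening hunt looks roughly like the toy disambiguator below. The cue-word lists and window sizes are invented for the demonstration; a real language model does this statistically, over learned representations rather than hand-picked words.

```python
# Illustrative sketch (not any production NLP system): disambiguating
# "suit" by widening the context window until one sense pulls ahead.

CLOTHING_CUES = {"wore", "tailored", "jacket", "tie", "wool", "fitting"}
LEGAL_CUES = {"court", "filed", "plaintiff", "judge", "damages", "settle"}

def disambiguate_suit(tokens, position, max_window=20):
    """Score each sense of 'suit' over ever-larger windows of context."""
    for window in range(2, max_window + 1):
        lo, hi = max(0, position - window), position + window + 1
        context = set(tokens[lo:hi])
        clothing = len(context & CLOTHING_CUES)
        legal = len(context & LEGAL_CUES)
        if clothing != legal:      # one sense has pulled ahead
            return "clothing" if clothing > legal else "legal action"
    return "ambiguous"             # ran out of context without a winner

sentence = "the plaintiff filed a suit and the judge set a date".split()
print(disambiguate_suit(sentence, sentence.index("suit")))  # legal action
```

Note what the loop does: when the narrow window fails to settle the question, the program's only move is to grab more context, which is exactly the escalating appetite for data described above.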