Today’s link roundup is a single article, one that hits on a point that’s been bouncing around my head constantly. For decades, we studied human intelligence in the hope that we might mimic our own thought mechanisms in computer code. Instead, as computing power becomes ubiquitous, meshing into a cheap, unified platform, we are discovering modes of thinking in that platform that do not seem to exist in our biology.
Over at Backchannel, David Weinberger posted a piece yesterday on the impact of what he calls “alien knowledge.”
We are increasingly relying on machines that derive conclusions from models that they themselves have created, models that are often beyond human comprehension, models that “think” about the world differently than we do.
But this comes with a price. This infusion of alien intelligence is bringing into question the assumptions embedded in our long Western tradition. We thought knowledge was about finding the order hidden in the chaos. We thought it was about simplifying the world. It looks like we were wrong. Knowing the world may require giving up on understanding it.
Weinberger goes on to describe these models, such as the AI engines that mastered the game of Go. Unlike previous iterations of “smart computing,” it isn’t possible for a human to describe how Google’s AlphaGo program evaluated its moves. Iterative AI engines are programmed to create their own thought models based on the conditions they encounter. What makes this interesting from a political perspective is the potential for AI engines to outclass our own reasoning. The old “Turing Test” goal for intelligent computing may be the wrong way to think about AI. Perhaps computers don’t need to mimic us in order to render our biological reasoning moot. Just look at the results of the last election.
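To make the opacity Weinberger describes concrete, here is a minimal, hypothetical sketch in Python: a tiny neural network that teaches itself the XOR rule. The network shape, learning rate, and epoch count are my own toy assumptions, nothing drawn from AlphaGo; the point is only that the trained weights solve the task while offering no human-readable account of how.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy network: 2 inputs -> 3 hidden units -> 1 output.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
b1 = [random.uniform(-1, 1) for _ in range(3)]
W2 = [random.uniform(-1, 1) for _ in range(3)]
b2 = random.uniform(-1, 1)

# The XOR truth table: the "world" this model must come to know.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    y = sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

initial_loss = total_loss()

lr = 0.5
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Backpropagation: nudge every weight against its error gradient.
        dy = (y - t) * y * (1 - y)
        for j in range(3):
            dh = dy * W2[j] * h[j] * (1 - h[j])
            W2[j] -= lr * dy * h[j]
            for i in range(2):
                W1[j][i] -= lr * dh * x[i]
            b1[j] -= lr * dh
        b2 -= lr * dy

final_loss = total_loss()
print(f"loss: {initial_loss:.3f} -> {final_loss:.3f}")
# The rule now lives in W1/W2 as raw numbers. Inspecting them tells a
# human almost nothing about *why* the network answers as it does.
```

Scaled up by many orders of magnitude, this is the predicament: the system demonstrably works, but the "model" it built is a pile of numeric weights, not an explanation.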
If you’ve spent any time on Twitter you may have encountered users who engage in slightly odd behavior. It isn’t always easy to identify a “bot,” a programmed engine for disseminating information (or more often, disinformation) on the platform, but they are ubiquitous. Phony information spread via automated techniques played a powerful role in the last election.
Those bots wouldn’t pass the Turing Test, yet they soundly defeated the human institutions they were programmed to target. We do not possess the computing power necessary to constantly filter a barrage of carefully crafted disinformation. AI engines are already outclassing us in ways that threaten the viability of key institutions.
Since we first started carving notches in sticks, we have used things in the world to help us to know that world. But never before have we relied on things that did not mirror human patterns of reasoning — we knew what each notch represented — and that we could not later check to see how our non-sentient partners in knowing came up with those answers. If knowing has always entailed being able to explain and justify our true beliefs — Plato’s notion, which has persisted for over two thousand years — what are we to make of a new type of knowledge, in which that task of justification is not just difficult or daunting but impossible?
As we look for ways to understand what happened in Election 2016 and prepare for what looms ahead, we should perhaps be thinking less about questions of policy and more about the impact of data overload on our minds. We may be bumping against biological limits in our capabilities, limits that require us to develop new social and technological adaptations to help us cope. A rationalist model of how human beings should best process information may be approaching the end of its evolutionary utility.