There’s something special about the matrix of lies that determined the 2016 Election. It’s normal for politicians and their advocates to obscure the truth, and our system evolved to absorb much of the impact of these distortions. What made 2016 unique wasn’t dishonesty; it was a new infrastructure of reality distortion unlike anything in prior experience.
Data analytics, AI, and concentrated interference by wealthy interests and foreign governments produced a propaganda juggernaut the Soviets would have envied. As our computing technology grows more powerful, there may be nothing we can do to limit the influence of “reality enhancement” on our politics. It is unclear how democracy, and the humanistic values on which it’s based, will weather this storm.
Lies, conspiracy theories and simple ignorance have always pulled at the fringes of democracy. Facing the Catholic Democrat Al Smith in 1928, Herbert Hoover’s campaign fanned one of America’s all-time great conspiracy theories. They convinced many fearful Protestants that Smith would build a tunnel from Washington to the Vatican, enabling a Catholic takeover of America.
Stories like this could be viewed as a form of folklore. The story was organic. It took no special brilliance to identify the power of anti-Catholic bias to mobilize American voters. In a simpler civilization with less concentrated power, we could survive the relatively weaker falsehoods and myths of hand-made, artisanal propaganda. That tunnel story had little if any influence on the 1928 election.
Even 20th century totalitarian states lacked the lie-power that we face. Their weakness was their dependence on centralization and control. A single rogue radio signal could pose an existential threat. Forced to choose a singular narrative and impose it uniformly across an entire population, their engines of disinformation were relatively brittle. They could only preserve their power by staving off complexity, an exercise that left them, in relative terms, artificially poor, artificially weak, and eventually vulnerable to systemic collapse.
What distinguishes our emerging ecosystem of lies from the past is its volume, sophistication and reach. The AI-driven engines of misinformation turned loose in the past few years are more than a mere megaphone. A new generation of data-driven propaganda tools can evolve on their own, learning from our feedback which messages are most successful, then refining and amplifying those lies. Using AI platforms, even simple ones available for free download, I don’t need to know anything about American racism to exploit American racism. These engines can mine our biases, even the ones we hide, and construct lies to exploit them.
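The feedback loop described here — try many messages, measure which ones get engagement, amplify the winners — is, at its core, an ordinary multi-armed-bandit pattern that any programmer can implement. A minimal illustrative sketch (the message names and the feedback data are invented, and no real platform's system is implied):

```python
import random

# Hypothetical candidate messages; names are invented for illustration.
messages = ["story_a", "story_b", "story_c"]
shows = {m: 0 for m in messages}   # how often each message was shown
clicks = {m: 0 for m in messages}  # how often it drew engagement

def pick_message(epsilon=0.1):
    """Epsilon-greedy selection: usually show the best performer so far,
    occasionally explore a random message to keep learning."""
    if random.random() < epsilon or all(n == 0 for n in shows.values()):
        return random.choice(messages)
    # Amplify whichever message has the best engagement rate to date.
    return max(messages, key=lambda m: clicks[m] / shows[m] if shows[m] else 0.0)

def record_feedback(message, clicked):
    """Fold audience reaction back into the model."""
    shows[message] += 1
    if clicked:
        clicks[message] += 1
```

The point of the sketch is how little machinery is required: a few counters and a comparison are enough to let the audience's own reactions steer which lie gets repeated.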
Thanks to obstruction by Republicans, we’ve been slow to gain insight into the 2016 disinformation campaign, but some details have emerged. Disinformation engines under development on the right since the Bush II years explored new techniques. An existing rightwing infrastructure of fake news factories like Breitbart, phony think tanks, and old-fashioned propaganda tools like Fox News was paired with new, far more sophisticated tactics.
Organizations like Cambridge Analytica explored the use of social media algorithms to manufacture content tailored to individual biases. They don’t appear to have been successful, but the Russians were. With an entire state security project dedicated to the American election, a foreign propaganda infrastructure including outlets like RT and Sputnik, and a long-established hacking infrastructure already prepared to leverage common AI and big data tools, they were a powerhouse.
Strengthened by intelligence gleaned from their American collaborators and data stolen via hacking, the Russians were able to craft particularly potent lies. They could spread those lies through a powerful propaganda engine, leveraging AI tools like their Twitter bots and phony Facebook accounts to hone the lies, pouring energy into the most successful conspiracy theories and distortions.
The computerized firepower directed at our collective limbic systems was so powerful that it got a guy elected president whom you wouldn’t trust to walk your dog. Sure, blame the Russians and the unpatriotic Republicans who collaborated with them, but don’t miss the bigger picture. Breaking up the Russian/Republican project might end that particular campaign, but it won’t disarm this cheap, readily available weapon, just waiting to be used again.
We live in a world that’s growing ever more hostile to unaided, unenhanced human biology. Our machines operate at a speed and scale not merely beyond our capacity, but in some ways hazardous to our health. To date, we’ve leveraged pharmaceutical tools like Valium and Prozac to help us perform under accelerating demands. We’ve engineered social tools like mass public education to manufacture a human population better adapted to an emerging economy. We’ve created social democracy to buy ourselves time to adapt to declining human value. We’re learning to use chemicals like Ritalin to enhance our performance. New genetic tools like CRISPR offer the potential to modify our core biology, deliberately engineering more adaptable traits. Despite these adaptations, the median human being is steadily losing ground as the capabilities of our synthetic environment grow more powerful by the day.
What happens to democracy when no human can beat a machine at chess, or when wars are fought without soldiers? What does democracy mean when the material success of a civilization depends more on healthy computers than on healthy people?
China is giving us a peek at a potential future, in which algorithms do much of the work once performed by bureaucrats, police, and politicians. Credit ratings are a new development so powerful and sweeping that we forget we once lived without them. Credit already dictates much of American life without any form of central control. Introduce tiny tweaks from political leaders, and a credit rating becomes a new form of government, independent of most human intervention.
“Social credit ratings” now determine a Chinese citizen’s capacity to engage in almost any common activity, including travel. Imagine Yelp for people, where your compliance with the law is a component of your rating, and that’s a social credit rating.
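The “Yelp for people” framing can be made concrete. A purely illustrative sketch of a composite rating, with invented component names and weights — this is not the formula of any real system, Chinese or American:

```python
# Illustrative only: invented components and weights, not any real system's.
WEIGHTS = {
    "financial_credit": 0.4,   # conventional creditworthiness
    "peer_ratings": 0.3,       # Yelp-style feedback from other people
    "legal_compliance": 0.3,   # the added layer of government data
}

def social_credit_score(components):
    """Weighted average of component scores, each on a 0-100 scale."""
    return sum(WEIGHTS[k] * components[k] for k in WEIGHTS)

# A hypothetical citizen: financially solid, well liked, but with a
# compliance record the state dislikes.
score = social_credit_score({
    "financial_credit": 80,
    "peer_ratings": 90,
    "legal_compliance": 50,
})
```

The unsettling part is not the arithmetic, which is trivial, but the last line of the weights table: once a “compliance” feed exists, folding it into an existing score is a one-line change.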
This may sound extreme, but the only real difference between the Chinese system and ours is that our system is lagging behind. Cameras now monitor almost everything anyone does in a major American city. Law enforcement would be practically inconceivable without mass surveillance and mass data collection. Between your credit rating, your feedback on dating sites, your LinkedIn account, and your social media history, algorithms already determine much of your access to the glories of a modern economy and a modern social fabric. Adding government compliance is just a single, perhaps unnecessary layer of data on an already thick digital cake. The Chinese are learning to govern without governors, leveraging some of the same tools used to wreck our democracy to promote order and stability, however Orwellian (or Huxleyan) that order might be.
When humans invented and spread the scientific method, they unleashed a force that would destroy civilizations premised on god-based religions. Humanism evolved out of hundreds of years of mayhem to rule a long era. One could argue, as Yuval Noah Harari has in his book Homo Deus, that the digital age has birthed a new evolutionary development displacing Humanism as our highest, most adaptive value.
I’m not convinced that Harari has the right answers, but in light of democracy’s growing dysfunction, he is certainly asking the essential questions. Election 2016 taught us the superior power of the algorithm over reason. Finding a way to preserve human meaning and value in a data state is the most consequential dilemma we face.
In other words, what comes after Humanism, and the political order it spawned? Does humanism survive in a world of algorithms, and in what forms? How do we preserve an order premised on human value when the economic importance of each ordinary human being wanes past a critical point? The Chinese have programmed their values into the algorithms governing their social order. Can the same be done with humanistic values?
Algorithms defeated democracy in 2016. Will we be ready for the next round of this fight? Should we, in fact, be fighting this trend at all, or should we be looking for ways to leverage this power toward the preservation of human values?