The Biological Problem Behind Politics

We seem to be living through a remarkable political transition. All of our previous forms of government are descending into disarray. A few decades ago, regimes built around unquestioned central planning collapsed in a heap. From that experience we concluded that decentralized, democratic organization was our highest state of existence, our End of History. Now those decentralized entities are sputtering, their dysfunction driven by the same fundamental problems of collective decision-making that destroyed their old rivals.

As we contemplate what form of social structure will best allow us to pool our resources toward common goals, it might be wise to revisit the most fundamental factors at work in politics. Why do we have politics? Why do we need government? What’s the point of our investment in public life and collaboration?

Politics grants rational agency to a structure that lacks a brain. Through politics, we breathe spirit into an entity drawn from our imaginations, lacking any tangible existence, granting it a capacity to reason and take deliberate, coordinated action like a person. This social adaptation allows us to take advantage of the wealth and security we derive from living in communities larger than kinship groups. Politics allows us to live in cities.

Why bother? Human beings can grow stronger and smarter by bringing more people and their brains together toward common goals. Cooperation lets us solve problems that would be impossible for us to address individually, or in small kinship groups. The first of those problems was mass agriculture. Out of the wealth of mass agriculture came cities. Out of cities comes the cauldron of technical adaptations that converted us from shaved apes to a thriving, dominant species.

There is no politics in a band of hunter-gatherers. Our English word, politics, comes from the Greek word for city, polis. Politics evolved because city life grants us tremendous evolutionary success, but it is a highly unnatural condition for humans. Urbanity makes demands on our bodies and minds that our biology has not adapted to bear. Despite the wealth and security of urban existence, something inside us still craves patterns of life we evolved to enjoy over hundreds of thousands of years. We build artificial forests and savannas within our cities to help us retain our sanity in this unnatural landscape. As individuals, we struggle to thrive in cities. And we chafe under the politics that urban life demands.  

Human prosperity grows with the size of our circle of cooperation, so we experience an incentive to form larger and larger collective units. But there’s a catch. The larger that circle of minds, the clunkier the executive process becomes. And the more people we try to encompass in that circle, the lower our capacity to incorporate their interests. A larger political unit produces more potential success but brings with it a social friction in the form of reduced decision-making efficiency and declining empathy. Have you ever heard a city described as “cold?” That same coldness of large groups makes politics feel threatening and distant.

Our capacity for coordination in large groups is hampered by our limitations in cognition and empathy. First, the volume of data and calculation required to effectively manage a collection of human beings increases exponentially as the group grows, quickly overwhelming the capabilities of a single person, or even an assembly of people. The more people in a group, the greater the complexity.
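One narrow slice of that complexity is easy to quantify. As a minimal sketch (my own illustration, not from the essay): the number of pairwise relationships a group must maintain grows quadratically with group size, before we even count coalitions or multi-party negotiations.

```python
def pairwise_channels(n: int) -> int:
    """Number of distinct two-person relationships in a group of n people."""
    return n * (n - 1) // 2

# A band of 5 has 10 relationships to track; a group at Dunbar's
# Number (150) has 11,175; a small town of 1,500 has over a million.
for n in (5, 150, 1500):
    print(n, pairwise_channels(n))
```

The function name here is mine, chosen for illustration; the point is only that the bookkeeping burden outruns group size long before any actual governing begins.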

Second, our ability to feel beyond our nerve endings is quite limited. We compensate with empathy, an adaptation that lets us conceptualize the sensations of people whose biological feedback (nerve endings, etc.) we do not experience. However, our capacity for caring seems to extend naturally only to roughly the size of a kinship group, perhaps as few as 150 people (“Dunbar’s Number”). Beyond that circle, our capacity to experience the emotions of others drops off precipitously, absent some socially engineered extensions.

In other words, with more people in a collective, the harder it becomes to make competent choices. As they say, too many cooks spoil the soup. One cook thinks we’re making French onion, another wants gumbo, and the next is aiming for tomato basil. The outcome is inedible.

The original solution to the problem of large-scale human organization was to choose a king. In many early cultures, those kings started out being elected, or at least chosen by a collection of powerful people. That king would operate with unquestioned power, using his brain as the brain of the collective. It worked to solve the primary concern of our earliest human governments – organizing the defense of their investments in settled agriculture or capturing the land and investments of others. Output from settled agriculture made the first cities, and their politics, possible. Kingship allowed large groups to coordinate their activities better than previous orders, but it did nothing to extend the limits of our rulers’ empathy.

Very quickly all of the world’s richest agricultural land came to be controlled by groups of people organized under monarchies. There was a cost to this success. By investing all of the group’s executive decision-making in a single brain, they improved their capacity to dominate the landscape and protect their agricultural investments from thieves or vandals. They also gained the ability to harness collective labor and initiative toward relatively simple collective goals, like building irrigation systems or public edifices.

A single brain in a single body may make rapid, coherent decisions for the group, but it has no biological means to feel what that group is experiencing. Empathy, which bridges that limitation, only extends so far. Monarchies survived under constant pressure from the misery they inflicted on the masses whose interests and experiences never made it into the limited calculus of the ruling machinery.

This is the central dilemma of politics from its inception into the present day. Government exists at constant tension with human biological limitations. Our ability to think or feel in large groups is too primitive to match our collective ambitions.

We’ve learned how to create governments powerful enough to steer the actions of large groups of people, but we struggle to make those governments smart enough, or responsive enough to the vast span of our experience, to thrive beyond certain limits. Our feedback mechanisms remain too primitive, and the cognitive capacity of those governing entities is too modest, to process the wealth of data and decision-making a large collective demands.

Kings or queens or presidents only feel their own pain. Their nerves don’t extend into your fingers. If their choices grant them rewards while worsening your condition, then the only biological signals they experience may tell them they have succeeded. Distribute power among many people and the same limits remain. Power distributed broadly enough to operate within the limits of our empathy undermines decision-making. Concentrate power sufficiently to enable decisions to be made, and those decisions will lack a grasp of the complexity of the issues, and a compassionate appreciation of those decisions’ impact.

What results eventually is a governing system in which those who succeed in obtaining some power use that power to further their own biological interests. This dynamic exists in democracies as readily as dictatorships, because the underlying biological incentives remain consistent regardless of the system.

Philosophers have been wrestling with this problem for ages, from Socrates’ guardians to Locke’s social contract. Politics is the story of our social and technological evolution struggling to extend beyond our innate biological limits.

Clues to a more effective social order might be found in nature, where other, much simpler organisms have evolved far more effective systems for collaboration. Our failure so far to recognize their potential, and how to mimic them, has been influenced by our misunderstanding of how they work. Once we see how they work, the starker problem comes into view. The biological adaptations necessary for thriving large colonies are missing in humans. We have to manufacture them out of social and technological adaptations.

Suggest to someone that a human society should function more like a colony of ants or a beehive, and you’ll hear a predictable objection. No one wants to be a mindless drone, enslaved to the will of a single “queen.” Asked to compare hive organisms to human societies, most people would say that they resemble communism or feudalism, where individuals live in mindless service to a master.

What’s funny about this characterization is that it’s utterly backward. There is no executive authority in an anthill. Ant colonies don’t reason, decide or plan. Every ant, like every bee, operates in an autonomous manner, carrying out its own biological programming for its own rewards. These creatures have evolved such that each individual reaps rewards for behavior that serves the larger good of the whole. A single ant climbing a leaf in your garden is looking out for #1 every bit as much as a Wall Street trader. That ant’s colony is improved by the individual’s reward-seeking behavior.

An anthill seems to think in a manner similar to our autonomic brain functions, through a neural net composed of many independent nodes sharing signals and genetically programmed responses. A nerve cell in your arm is not “subservient” to the cells in your brain. That cell operates with no knowledge that a brain exists. Your nerves operate in a network, sharing outcomes and goals through a biologically encoded reward structure. Our concept of individual rationality has some weaknesses that impair our understanding of our world. From Stuart Russell in his book about AI safety, Human Compatible:

Another critique of the theory of rationality lies in the identification of the locus of decision making. That is, what things count as agents? It might seem obvious that humans are agents, but what about families, tribes, corporations, cultures, and nation-states? If we examine social insects such as ants, does it make sense to consider a single ant as an intelligent agent, or does the intelligence really lie in the colony as a whole, with a kind of composite brain made up of multiple ant brains and bodies that are interconnected by pheromone signaling instead of electrical signaling?

From an evolutionary point of view, this may be a more productive way of thinking about ants, since the ants in a given colony are typically closely related. As individuals, ants and other social insects seem to lack an instinct for self-preservation as distinct from colony preservation: they will always throw themselves into battle against invaders, even at suicidal odds. Yet, sometimes humans will do the same even to defend unrelated humans; it is as if the species benefits from the presence of some fraction of individuals who are willing to sacrifice themselves in battle, or to go off on wild, speculative voyages of exploration, or to nurture the offspring of others. In such cases, an analysis of rationality that focuses entirely on the individual is clearly missing something essential.

Our age-old characterization of hive animals brainlessly carrying out orders from a commander for someone else’s benefit is an example of humans seeing nature through the lens of our experience. Hive animals evolved, at the level of DNA, a rewards matrix that serves both themselves and the hive. Hive animals function far more like market participants than like proletarians in grey jumpsuits.
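This decentralized mechanism can be sketched in a few lines of code. What follows is my own toy version of the classic “double-bridge” experiment from the ant-colony literature, not anything from the essay itself: each simulated ant independently picks one of two paths based only on the local pheromone trail, and deposits trail in proportion to how short its trip was. No ant knows the layout, and nothing issues orders, yet the colony converges on the better path.

```python
import random

def simulate_colony(n_ants=100, n_rounds=200, evaporation=0.5, seed=42):
    """Toy double-bridge model: two paths from nest to food, lengths 1 and 2.

    Each ant picks a path with probability proportional to its pheromone
    level, then deposits pheromone inversely proportional to path length
    (a shorter round trip lays down more trail per unit time). There is
    no central controller; the colony-level preference emerges from
    individual reward-seeking plus a shared signal.
    """
    rng = random.Random(seed)
    lengths = {"short": 1.0, "long": 2.0}
    pheromone = {"short": 1.0, "long": 1.0}  # start with no bias

    for _ in range(n_rounds):
        deposits = {"short": 0.0, "long": 0.0}
        total = pheromone["short"] + pheromone["long"]
        for _ in range(n_ants):
            # Each ant's "decision" is purely local: follow the trail.
            path = "short" if rng.random() < pheromone["short"] / total else "long"
            deposits[path] += 1.0 / lengths[path]
        for path in pheromone:
            # Evaporation keeps old information from dominating forever.
            pheromone[path] = (1 - evaporation) * pheromone[path] + deposits[path]

    return pheromone

trail = simulate_colony()
print(trail["short"] > trail["long"])  # the colony "decides" without a decider
```

The function name and parameters are my own assumptions for illustration. The design point mirrors the essay’s argument: the intelligence lives in the feedback loop among the nodes, not in any individual node.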

When people look down on a city from the top of a skyscraper, how often do they describe what they see in ant-metaphors? People are “scurrying around like ants.” They see movement and structures that look to us like an anthill. That isn’t an accident. We lag behind these creatures in the sophistication of our communal living thanks to an evolutionary branch that diverted energy toward developing large individual brains. In time, that investment might pay off in biological groups with overwhelming success. For now, though, ants remain among the planet’s most successful animals.

In human societies, as in the natural world, adapting to sustain successful large societies probably depends less on creating a central brain than on evolving a decentralized matrix of rewards for behavior consistent with collective life. The problem is that we share none of the biology which brings ant or bee behavior into line with shared needs. We developed politics instead of hives, because the behaviors we’re most drawn toward as individuals are incompatible with the growth of large, successful collectives. Human societies that have grown most wealthy and powerful always contained a police power you won’t find in an anthill.

What emerges from the decline of democracy and socialism might be some adaptation, based on computer technology, that merges elements of market liberalism with the executive coherence of 20th century socialism. Prototypes of this order have already taken shape in some smaller nation-states like Denmark and Holland. The greatest weaknesses of this order are our two persistent biological limitations – our inability to sense each other’s needs beyond the reach of kinship, and the simple cognitive challenge of processing mass data. A government big enough to encompass a vast nation is too big to care about me personally, and it produces too many decisions for a central authority to competently execute.

Thanks to technology, we might already have the capacity to solve that second, data-processing and calculation challenge. What may emerge next is a government with the power to plan and control a massive population, which yet retains the weakness of our inherent empathy. As the Chinese continue to tinker with AI, we may be seeing this new form of government emerge with troubling consequences.

Our Chinese cousins may be building the machine for effective human planning that escaped earlier totalitarian regimes, but toward whose interests? If someone solves the computational problem of controlling a vast nation-state without somehow extending the biological limitations of human empathy, what horrors might that resultant state inflict? If the Chinese experiment with AI produces the wealth and power one might expect, then we may have a problem on our hands.

As our technological evolution promises new extensions of political power, our innate biological limitations threaten to skew those efforts toward frightening outcomes. Any competent step forward in our political development should be taken in humble acknowledgement of our biological needs and limits.


This post is part of a series exploring what’s next after liberal democracy and what we should do to prepare. Much of this material was covered in The Politics of Crazy, though from the perspective of a more optimistic era. The work fits better as a whole, but reading through a 6000+ word piece on a computer seems impractical. When these are complete I’ll gather them into a series of links on a single page.


  1. I agree with you about the biological limits. But let me pose the question a different way: the need for decentralization is due to the inability to gather and process centralized information. The *side effect*, is that it also tends to keep the leaders closer and more empathetic / understanding of the people. So when comparing 2 systems, the question is: 1) Does the deficiency of information processing, in system A, actually lead to a commensurate gain in empathy and conversely 2) does the efficiency of centralization in system B actually lead to a commensurate decline in empathy? And at the end, which system strikes the right balance and proves more evolutionarily successful? This sounds theoretical, so let me give a few examples, starting with the extremes:

1) Cain and Abel were brothers. Just about the smallest unit of organization you can have. One was a farmer and one was a shepherd, i.e. they were actually organized. Yet the vastly decentralized information system they had did not lead to a commensurate increase in empathy. IOW, just because a system is decentralized doesn’t always mean it’s empathetic. Our human nature is not always bent toward cooperation and caring, even among close family members, much less other social groups. There needs to be something more.

    2) To bring up a previous point I made, Disney is hugely centralized, across multiple continents, integrating disparate populations who don’t even speak the same language into a single culture (and then sells them stuff :-). They also have massively effective data analytics, collecting everything from what you watch on their cable channels, what you buy in their stores, which of their movies you buy tickets for, how long you wait for their rides, etc. And yet they’re hugely empathetic. Disney’s CEO has less in common with an average kid in Hong Kong than does the Chinese President. Yet I’d argue (and I bet the Hong Kong kid would agree 🙂 that Disney is far more attuned to his life. Disney often knows what beats in the heart of a 5 year old better than his own parents do. They do cultural assimilation far better than any nation state; has a GDP higher than most of them; manages massive, centralized information flows well; owns and controls worldwide communication channels more effectively than the KGB; and does it all in a way that leaves most of their customers (from 6 month old babies in Alabama to 80 year old grandparents in Mongolia) with a smile on their face. It can be done. What does the Mouse House know that governments don’t?

Looking at actual governments, for a long time, decentralized govts succeeded because centralized govts didn’t have the information to really be superior, and the benefits of increased empathy (even if minimal) were enough to favor decentralized govts. But I do think this is changing. It may be that totalitarian govts of the past failed not because the theory was any worse than capitalism, but because it simply lacked the information flows that its centralized structures required. We can no longer afford to accept the myth that our system of “freedom” succeeded because of any intrinsic reasons that democracy is somehow better, rather than due to external limitations (now rapidly going away) that increased our chances of success.

    In the real world, there’s actually been a 70 year experiment running that I think proves your point: as devastating as the British occupation of India was, India was in better shape afterwards than China was after Mao’s Cultural revolution. Not only was India spared the massacres, but even per capita GDP was (somewhat) higher. They both even faced their own partitions (India / Pakistan, China / Taiwan). Clearly, China’s centralized systems didn’t really benefit their country at that time, and India could claim to at least be more empathetic to its citizens. Both countries continued to grow slowly over the next few decades, neither one really taking a decisive lead. India even flirts with a democratic form of communism (it’s the only democracy that allows the communist party to exist, and they are a major regional power, capturing control of various states from time to time).

    Both of them decided to embark on free market economic reforms at about the same time (China about 10 years earlier, which, in fairness, some people argue accounts for all the difference). Things really got going only after the fall of the Soviet Union. And the results are staggering. China is now about 2-3x India’s GDP and continues to grow faster than India. Both started from similar backgrounds. But China timed its conversion (by skill or luck, I’m not sure) just when centralized information flows were exploding. Meanwhile, even as India’s democracy has gotten more responsive over the past several decades — and even more empathetic, now that it’s no longer under 1-party rule by the Congress Party — which *has* sparked a rapidly improving quality of life for its citizens, it’s been nowhere near the same progress as China.

    I’m not really sure where I’m going with all of this except to say that the correlation you make between decentralized vs centralized systems and empathy vs not is not an automatic one. I don’t think you’re saying it is, either. So maybe what I’m really saying is I hope your next post is a dissection of Disney and why it’s now their world and we’re just living in it, and maybe that’s not a bad thing (I’m waiting for my baby Yoda plush toy to arrive just like everyone else on this Earth 🙂 ).

    1. As usual, you’re anticipating the next three posts on the subject.

      There’s nothing inherently un-empathetic about AI. It just depends on the priorities programmed into its learning model. And, consistent with a long-running theme here at the Political Orphans blog, corporations are building AI systems and applications that incorporate far more humane traits than what we’re seeing from governments.

      China’s AI experiment isn’t cruel and horrifying because AI is cruel and horrifying. Those are the priorities being built into the model. China is instrumenting what we might think of as digital Fascism, where massive parallel processing and machine learning fill the frustrating computational gap that hamstrung earlier totalitarian regimes.

Thing is, whoever breaks through first with a sufficiently successful AI-driven political system will inherit a first-mover advantage that will be hard to stop. And success will be measured by power, not friendliness or happiness or the life-experiences of those trapped beneath it. If a soul-crushing AI-empowered regime can produce a 10x improvement in wealth over a decade and a half or so, something that might actually be too modest a prediction, then it might not matter that no one likes it.

      Point being, maybe it would be smart for governments that retain at least a modest concern for human thriving to notice what China (and Disney) are doing well, and figure out how to use it to replace their decaying processes.

      And maybe that’s impossible. Maybe the reason this is happening in China instead of Germany or the US is that democracy makes these massive evolutionary leaps too burdened by caution and competing interests to be possible.

    2. EJ

      As someone who works with machine learning algorithms, I would caution you to be careful about being optimistic about them. ML (and to an even greater extent techniques like deep learning and neural networks) is a useful tool to solve certain very specific problems. Outside of those circumstances it is worse in every way than the judgement of a human, although it may be cheaper in the short term.

      The danger with AI, in my professional opinion, is not that it will eat the world. The danger is that we are so eager to use it that we will attempt to force human society to become more amenable to ML solutions: more abstracted, less unique, less complicated, easier to predict.

      Attempting to make a society easier to measure in order to better control it is not a new idea. Prussia was doing this in the 17th century. France was doing it in the early 20th century. The Soviet Union was built on it and died by it. The American James C. Scott wrote an excellent book on the subject. As he concluded, this doesn’t work. At best, it survives for as long as coercive force can be applied to people to make them conform to it; but if a state has that much coercive force then it doesn’t need machine learning.

      1. You have a point. And that’s likely to occur in the more authoritarian regimes. For societies already acclimated to a large degree of popular participation, I think the expansion of AI might go down differently.

        Here’s what I’m picturing for free societies.

        Whenever a city needs to change its zoning rules, or the EPA wants to make a rule change, they have to initiate a “study.” In almost every case, that so-called study is just a contract with a collection of outside consultants who perform the cognitive work that’s beyond the time and means of the elected representatives.

        They gather data. They calculate results from that data, and 18 months later present a summary of that data with conclusions. I’ve never seen one of these that couldn’t be automated down to a few days, or in some cases about 35 seconds, with algorithms and data lakes that are already available. They are almost always merely large-scale calculations, with parameters (the things the city values) being fed to the consultants by elected representatives.

        I’m not imagining some massive, sci-fi general AI that replaces all human cognition, at least not in the west. In present-day democracies, I’m picturing us choosing to replace un-transparent, corrupt, and frankly quite weak human intelligences with more dependable, accountable and powerful digital calculations wherever we can. The same thing businesses are already doing on a large scale.

        At work, we don’t ever try to make serious corporate decisions solely on the basis of calculations or opinions coming from people. In the ‘data-driven enterprise’ everything has a calculus, and those calculations are not analyzed by people. Only the output, spewed out by computer applications and increasingly analyzed by ML programs, is evaluated by people. To make business decisions based solely on the amount of data and calculation available to humans would be commercial suicide.

        Where things could go terribly wrong is in policing.

      2. EJ

        That’s an interesting point. This is the technocratic dream, isn’t it? An impartial, data-driven governance system which tells people how to achieve their stated outcome in the best way possible.

        The fight then becomes over which data is considered relevant to the outcome and which is considered irrelevant; and once again we have a human decision.

        For example, let us suppose we’re looking at an urban redevelopment scheme. Do we consider the long-term carbon emission data to be relevant? If so, we would almost certainly exclude cars entirely. Therefore, people who like cars would press for this data to be excluded from the model. I would be surprised if various lobby groups did not push strongly for “their” technical solution to be adopted.

        To be clear I’m not against technology aiding us in decision making – I chose to work in that field, after all. However, my experience is that for many decision makers, the most valuable thing about ML and other data-driven approaches is not that it gives answers, but that it gives a fig-leaf of objectivity to the decisions that they were going to make anyway. This then becomes a human-politics issue and we’re back to square one.

      3. EJ

        It’s harder to challenge because it’s less transparent to the disempowered, for one thing.

        I take your point though: AI is not revolutionary. Rather, it empowers those who are already powerful to continue doing what they’re already doing.

      4. EJ

        I have, although not as much as you; of course they’ve dissimulated but the reasons behind it have normally been fairly easy to discern with a little digging – in my locale it’s pressure from developers about sixty percent of the time.

        You’ve also done work with analytics I believe; in your experience do you think it’s easier or harder for bad-faith actors to hide stuff in there than in normal decision making?

  2. As interesting as your historical review of how political systems “evolve”, I feel as though we are once more trying to explain something flawed but good that is being destroyed by one very self-serving, evil individual, enabled by people who otherwise have demonstrated more respect for democratic norms and institutions. Obviously, the failure of the GOP to act responsibly is a reflection of America’s populace who has enabled them through their failure to hold them accountable, if not “led them” to this level of disrespect.

Jonah Goldberg offers his opinion in the National Review (yes I read this for perspective) about how this situation has devolved into such a cesspool of disgusting, spineless enablers. Your piece looks holistically at the purpose politics serves – for good and bad – Goldberg suggests there have been many forks in the road for this president. He fails to accord enough responsibility to the Republicans in Congress, but, one can ask only so much of a NR piece.

      1. Gould hits it so clearly. It seems to me that much of Western humankind has spent most of its existence trying to elevate itself above nature instead of figuring out how to live well within its confines.

        The idea of living on Mars after planet Earth is dirtied up by humans just makes me sick.

      2. “ Gould hits it so clearly. It seems to me that much of Western humankind has spent most of its existence trying to elevate itself above nature instead of figuring out how to live well within its confines.”

This is one major area where I think Christianity, in particular the conservative flavor, leads people astray. Mike Huckabee once said that he was a conservationist, as opposed to an environmentalist, because he believed that Nature served mankind. He could not be more wrong. Nature serves no one. It does not play favorites. If a species pushes things out of balance, sooner or later Nature pushes back. We are going to get pushback for all that long sequestered carbon we’re dumping back into the atmosphere; the only question is how severe will it be. It’s definitely yet another sign that the universe isn’t fair that the first people to feel the effects will be the people with the least responsibility for the problem. I suspect GOP voters aren’t going to get the hint until much of Florida is under water.

My saying on this: “God may love you, but Mother Nature couldn’t care less.”

    1. There’s an interesting theory for why we haven’t yet found evidence of advanced life out there. After all, given so many planets and stars, there have to be *tons* of them out there. And even if we can’t go out there, surely some of them should be visiting us.

      The theory goes that the reason there are so few, is that once a lifeform progresses to the intelligence / organizational / innovational level of being able to explore the planets, it inevitably also develops enough weapons technology to wipe itself out. Which it does fairly soon because the temptation is too much. At which point, life goes back to the bacterial stage (or maybe the cockroach stage if we’re lucky and only talking about primitive weapons like our nuclear bombs 🙂 ). So an advanced civilization that’s able to live for thousands or millions of years *after* it reaches the stage of advanced, world-destroying weaponry, is almost impossible.

That’s quite possible (and also very depressing). There’s also the hypothesis that Calvin said to Hobbes, that proof of intelligent life was the fact that no one was trying to make first contact with Earth! If I’m an observer from a hypothetical civilization that managed to survive to do interstellar travel, I’d be recommending a no fly zone around this solar system! The inhabitants have a nasty habit of preying on themselves, and they’re trashing their only planet- don’t want that getting out into the rest of the galaxy.

The Fermi paradox is interesting and thought-provoking and a bit depressing, but also frustrating in the sense of how little we know. We have no inkling just how common life is out in the universe. We scan for signs of intelligence based on what we have done- there’s no guarantee we’re doing the right sort of search. I love the thought of exploring the stars, but we’re not worthy of that when we can’t manage just one planet.

      2. I know that it is technologically not possible to prove, but imagine if by some way incontrovertible proof could be found that Venus had a highly advanced technological civilization which existed there over a billion years ago.

That would do quite a bit to drive the effort to stop runaway Global Warming.
BTW, there is a non-zero chance life DID exist on Venus, and for a long, long time. But that proof is not going to happen.

        This cushy existence mankind has on this planet is coming to an end and right quick. (less than 2 generations). People are too stupid and selfish to do what is necessary, and the positive feedback loop of all the methane being released in the melting permafrost in Alaska, Canada, and Russia is just starting to crank up. Once Global Warming REALLY gets going, the resulting wars over water and arable land will also do a number on the biosphere.

        Mankind will survive. We are like cockroaches. But the biosphere will so radically change, no way it will support the vast majority of existing species, and no way it will support 7.4 billion humans.

      3. wet bulb indices rise above what even the young and fit can endure

        We’re held hostage by Christians and those who visit Davos once a year.

Would this be true in a true democracy instead of the pseudo democracy we enjoy in the USA (where we equate capitalism with a system of government)?

      4. EJ

        Looking at what’s happening in Australia right now, I am not certain that any amount of evidence is enough. As long as money has a greater than zero influence in politics – and how can it not – even I find it difficult to be optimistic.

        (For those who don’t pay attention to Australian politics, they had the worst climate-change-aggravated wildfire season in history, causing apocalyptic skies in major cities and huge numbers of refugees. The governing party saw this, saw the scientific data, saw the amount of money donated to their reelection campaigns by the coal industry, then decided to blame “arsonists”, even suggesting that climate campaigners had committed arson to back up their campaigns.)

      5. EJ

        We’re only as doomed as we let ourselves be, I think. Trolls pushing absurd conspiracy theories are just trolls and can be ignored; however, people in positions of financial and governmental power who push absurd conspiracy theories are a problem. If we remove their power, they go back to just being a pack of sad trolls who can be ignored.
