The Fermi Paradox and Collectivist Ethics

serial_silla
3 min read · Sep 30, 2018

Bret Weinstein recently posted a video responding to a question about the evolutionary significance of the Fermi Paradox.

The gist of the Fermi Paradox is as follows: if you crunch all the numbers, it is highly implausible that we are alone in the universe. It is therefore surprising that, given the odds of intelligent life arising, we have so far detected no evidence of it.
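The "numbers" in question are usually those of the Drake equation, which multiplies a chain of factors to estimate how many detectable civilizations the galaxy should host. Here is a minimal sketch; the parameter values are purely illustrative placeholders, not estimates from the literature, and each one is contested.

```python
# A rough sketch of the Drake equation: N = R* · fp · ne · fl · fi · fc · L.
# The default values below are placeholders for illustration only; published
# estimates for several of these factors vary by orders of magnitude.

def drake_estimate(
    star_formation_rate=1.0,   # R*: new stars formed per year in the galaxy
    f_planets=0.5,             # fp: fraction of stars with planetary systems
    n_habitable=2.0,           # ne: habitable planets per such system
    f_life=0.5,                # fl: fraction of those on which life arises
    f_intelligent=0.1,         # fi: fraction of those that develop intelligence
    f_communicating=0.1,       # fc: fraction that emit detectable signals
    lifetime_years=10_000.0,   # L: years a civilization remains detectable
):
    """Return N, the expected number of detectable civilizations in the galaxy."""
    return (star_formation_rate * f_planets * n_habitable *
            f_life * f_intelligent * f_communicating * lifetime_years)

print(drake_estimate())  # ~50 with these illustrative inputs
```

Note that the final factor, L, the length of time a civilization remains detectable, is precisely the term that the explanation below calls into question.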

Of all the explanations on offer, the most troubling is that intelligent life may indeed have arisen elsewhere, but that there is an inevitable point in a civilization's development at which its capacity to wreak catastrophic havoc brings about its own absolute destruction.

Weinstein regularly points to the inherent fragility of the nuclear infrastructure as an example of this, but one could point to any number of examples, such as environmental self-sabotage through man-made climate change, global epidemics hastened by globalized transportation networks, or even precursors to global anarchy such as a meltdown of the financial system.

Elon Musk and others have gloomily posited that artificial intelligence poses the nearest-term existential threat to humanity, with the most likely scenarios being devastation wrought by advanced military technology or the gradual erosion of human dominance in the realm of intelligence itself.

In all of the above scenarios, technology is the means of a civilization’s undoing.

But it hardly needs saying that technology also has the potential to save humanity, by enabling us to protect and preserve life in the face of natural disasters, and eventually to populate other planets in the solar system.

What is it that makes technology a savior or a destroyer? The answer is surely culture. And if this is true, then we should be nurturing, critiquing and developing our culture with the same enthusiasm that we devote to scientific research.

I had an interesting conversation once with a retired engineer who used to work for British Telecom. We were discussing the comparative speed with which South Korea had modernized, and in doing so brought highly advanced technology into the hands of virtually all of its citizens, while Western societies were (and still are) struggling to catch up.

He recalled attending presentations in the early 1980s, where futurists predicted a technological utopia in the UK within a decade.

When I asked him why he believed this utopia had failed to materialize on schedule, he said that bringing about a technological overhaul required too large an investment from any individual firm or government entity for anyone to contemplate taking the risk.

In other words, it required a group of individual agents to a) view the problem they were trying to solve as one that benefited society, and was hence worth a greater risk, and b) proceed with the confidence that other agents in society would share their view, thus mitigating the risk.

It could be argued that any apocalyptic scenario must begin from an individual or group taking the decision to put what they regard as their own interests above those of humanity at large. This need not be a terrorist organization, or a renegade dictatorship. It need not even be a corporation chasing short-term profits. It could even be the majority of the human race, content with bargaining away the safety of future generations for the sake of a slightly easier life.

Returning to the long-term survival of humanity: it seems clear that the best evolutionary strategy for a civilization is a collectivist approach, wherein each individual works for the group horizontally and for future generations vertically. Individualism, if pursued sincerely, leads to the destruction of all individuals and hence defeats its own object.

Politics is unlikely to hold the answer. Conservatives tend to think more vertically than horizontally, and Liberals vice versa.

One hopes that cultures (not political systems) whose values tend towards collectivism prevail, and program the artificial intelligence algorithms that undergird our technological future, before a series of nuclear infernos engulfs humanity's last hope of breaking the Fermi Paradox.
