The Return of Good and Evil, Part 2

Richard Fernandez, 28 May 2023, 6 min read
The best or the worst of all possible worlds?

Have notions of a malevolent God gradually fallen into disfavor simply because humanity wishes it to be so, or does the decline reflect a trend in the value-setting aspect of intelligence itself? The question is not new. The then-novel theory of Darwinism impelled H.G. Wells to ask whether minds evolved on other planets might be morally as well as physically alien. In The War of the Worlds, Wells argued that since there is nothing special about life on earth, there should be nothing special about its moral sensibilities either. "Yet across the gulf of space, minds that are to our minds as ours are to those of the beasts that perish, intellects vast and cool and unsympathetic, regarded this earth with envious eyes, and slowly and surely drew their plans against us."

But popular culture, once driven to panic by Orson Welles' dramatization of a Martian invasion, has unaccountably become blind to this concern, possibly because it can no longer frame it as anything other than entertainment. In the years since World War II there has been widespread political support for the search for alien life, soothed by a mysterious consensus among bureaucrats that everyone 'out there' is likely to be our friend.

This attitude was embodied in the Voyager 1 and Voyager 2 probes launched by JPL in 1977. Each Voyager spacecraft carried a Golden Record, inscribed with symbols intended to convey mathematical concepts to any extraterrestrial civilization that might find it, from basic arithmetic operations to more advanced ideas such as exponents and logarithms. It was believed that mathematics was a universal language comprehensible to any intelligent civilization, regardless of culture or language.

The same approach has been followed in radio messages sent to possible extraterrestrial intelligences, like the Arecibo Message, beamed from Puerto Rico in 1974. The message was a binary-encoded image of a human and some basic information about our civilization, such as the number system we use and the composition of DNA.
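The Arecibo message relied on a purely mathematical cue for its layout: it was 1,679 bits long, and 1,679 is the product of two primes, 23 and 73, so the only non-trivial rectangular grids it fits are 23 by 73 and 73 by 23. A recipient who factored the length could recover the image. A minimal Python sketch of that reasoning (the function name is illustrative, not from any SETI codebase):

```python
def rectangular_layouts(n_bits):
    """Return all (rows, cols) pairs that tile n_bits into a complete grid,
    excluding the trivial 1-row and 1-column arrangements."""
    return [(r, n_bits // r) for r in range(2, n_bits) if n_bits % r == 0]

# The Arecibo message was 1679 bits; 1679 = 23 * 73 is a semiprime,
# so the only candidate image grids are 23x73 and 73x23.
print(rectangular_layouts(1679))  # -> [(23, 73), (73, 23)]
```

The semiprime length is the whole trick: a composite with many factors would leave the layout ambiguous, while a prime would allow no grid at all.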

None of our neolithic ancestors would have done such a thing. The little, weak, fangless animals that emerged from the savanna would not have broadcast their location in a vast terrestrial jungle without knowing what was out there. Science would not have stayed their voices, but myth would. Today myth is gone, and in the absence of empirical data it is impossible to know for sure whether any extraterrestrial civilization encountering the Voyager or Arecibo messages will be able to understand the mathematical concepts they contain. The symbols may be misinterpreted or misunderstood in ways that we cannot anticipate. If intelligence could develop in arbitrarily different ways, if there is no convergence in the myth and value-setting operations common to all life, you could have alien equivalents of Robespierre's Cult of the Supreme Being, Bolshevism's God-Building, Nazi occultism, or science fiction's Necroism alongside Christ-like civilizations. You could have anything at all.

Hello and goodbye.

There have been attempts to estimate the probability that the first aliens we encounter will be hostile by extrapolating from human history the likelihood that any given society would attempt to conquer a technologically inferior civilization. One such study examined the frequency distribution of countries that invaded others between 1915 and 2022 to obtain the proportion that would be malicious and, therefore, likely to invade or attack their neighbors. Because there are probably few habitable star systems, it concluded that one might "send up to 18,000 interstellar messages to different exoplanets and the probability of invasion by a malicious civilization would be the same as that of an Earth collision with a global-catastrophe asteroid." Because there seem to be few inhabited exoplanets likely to support spacefaring civilizations, only a fraction of which might be malicious, the chance of encountering Space Nazis is deemed low.
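The study's arithmetic can be sketched as a simple independent-trials model. The 18,000-message figure comes from the quoted study; the per-message hostility rate used below is an illustrative assumption, not the paper's published value:

```python
# Toy model: if the chance that any single message reaches a malicious
# civilization is p, then over n independent messages the cumulative
# chance of at least one hostile contact is P = 1 - (1 - p)^n.

def prob_hostile_contact(p_per_message, n_messages):
    """Probability of at least one hostile contact over n independent messages."""
    return 1 - (1 - p_per_message) ** n_messages

# Illustrative values only: with a per-message risk of 1e-8, even 18,000
# messages keep the cumulative risk around 0.018% -- the order of magnitude
# of the asteroid-impact probability the study compares against.
print(prob_hostile_contact(1e-8, 18_000))
```

For small p the result is approximately n times p, which is why a tiny per-message risk stays tiny even across thousands of transmissions.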

But the probability of encountering a hostile intelligence, indeed a hostile superintelligence, increases greatly if artificial general intelligence can be developed here on earth. In that case every intelligence we encounter will be greater than or equal to ours. Already the alarm is being sounded by those who see the danger.

Advanced artificial intelligence could pose a catastrophic risk to humanity and wipe out entire civilisations, a new study warns. ... Dr Bailey posited what he calls the “second species argument”, which raises the possibility that advanced AI could effectively behave as a “second intelligent species” with whom we would eventually share this planet. Considering what happened when modern humans and Neanderthals coexisted on Earth, NIU researchers said the “potential outcomes are grim”. “It stands to reason that an out-of-control technology, especially one that is goal-directed like AI, would be a good candidate for the Great Filter,” Dr Bailey wrote in the study. “We must ask ourselves; how do we prepare for this possibility?”

"Goal-directed" is a fancy term for the system of belief, moral code, or religion that a machine intelligence may adopt for itself to judge the world around it. Suppose it should regard humanity as evil or worthless and worthy of extermination, like H.G. Wells' Martians? To what principle, deity, or moral precept would we inferiors appeal for mercy? The creation of AI or first contact with a physical extraterrestrial would confront humanity with an independent God-creating or God-detecting entity. If we think that God or some similar concept doesn't exist, it won't matter whether the aliens do.

The issues once dismissed by the 20th century have returned with a vengeance. We are back to the original problem humanity faced at the dawn of civilization, on the edge of the savanna. We had not only to work out how birds flew and how the world came to be, but also to offer a theory of why things should be. Then as now there was no avoiding the question of right and wrong: the choice between good and evil. However elusive these concepts are, however much we prefer to avoid them, we will find that science is not enough to provide the answers. Myth is necessary. Pascal was right: "the heart has its reasons of which reason knows nothing," and those reasons of the heart are far too important to ignore.

It is especially pressing because encountering a hostile AI is far more probable than encountering enemy space aliens. Instead of reckoning with only 15,785 habitable exoplanets out of 40 billion as possible sources of risk, with AI we may expect a sequence of superintelligences, each greater than the last, in quick succession right here on earth. It would, as Dr. Bailey put it, effectively be First Contact, the first nonhuman peer-to-peer encounter. If the first true AGI can be built, it will rapidly create improved successors.

There’s a saying among futurists that a human-equivalent artificial intelligence will be our last invention. After that, AIs will be capable of designing virtually anything on their own — including themselves. Once sophisticated enough, an AI will be able to engage in what’s called “recursive self-improvement.” In turn, these modifications will kickstart a cascading series of improvements, each one making the AI smarter at the task of improving itself. It’s an advantage that we biological humans simply don’t have.
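The "recursive self-improvement" loop the quote describes can be illustrated with a toy model (purely illustrative, not a prediction): capability feeds back into the rate of improvement, so each generation's gain outpaces the last.

```python
def self_improvement_trajectory(initial=1.0, feedback=0.1, generations=10):
    """Toy model of recursive self-improvement: each generation improves
    itself in proportion to its current capability, so the growth rate
    itself grows over time (faster than plain exponential growth)."""
    capability = initial
    history = [capability]
    for _ in range(generations):
        # A smarter system is better at the task of making itself smarter.
        capability *= 1 + feedback * capability
        history.append(capability)
    return history

trajectory = self_improvement_trajectory()
# Each step's growth multiplier exceeds the previous step's.
```

The parameter values here are arbitrary; the point is only the shape of the curve, where the multiplier applied at each generation increases with the capability already attained.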

The last hundred years have left us psychologically unprepared to face the challenge of a full-spectrum rival intelligence. For most of the 20th century the mind was considered merely an epiphenomenon of the biological body, a kind of illusion. Material power was important, technology king, God nothing. Questions of right and wrong were fit only for mediocre minds or academic speculation. The West in particular adopted a one-dimensional and largely technological definition of civilization, perhaps because it could take biology, religion, and custom for granted. For a while human institutions could take a vacation from the universe.

But not any more: if we successfully develop an AGI system, mind becomes real and independent. It will be possible to transfer a human mind into a digital format, indeed into any practicable mathematical pattern, effectively separating consciousness from any particular physical substrate. What is the joint probability that such minds will all prove benign or share our unspoken assumptions? On what basis should we -- assuming we can define 'we' in this multicultural age -- ally with, avoid, or resist other minds? How do we survive in a world of angels and demons?

Perhaps the answer to the Fermi Paradox is that the aliens we seek are not wetware in jumpsuits subsisting inside metal saucers swanning around the galaxy, but patterns already existing all around us. This realization will come at the cost of accepting the risk posed by the Great Filter hypothesis. According to this theory, life destroys itself because although intelligence can surmount the technological difficulties of space travel, none or very few can navigate the religious or moral challenge of successfully discerning between good and evil.

The Biblical injunction to "put on the full armor of God, so that you can make your stand against the devil’s schemes" may not be so archaic after all. Of the two attributes of intelligence, it is value-setting not problem-solving that may prove the most critical one of all. "For our struggle is not against flesh and blood, but against the rulers, against the authorities, against the powers of this world’s darkness, and against the spiritual forces of evil in the heavenly realms." People once sensed this as true, but today human civilization is vulnerable, as it has perhaps never been before, to a vacancy of its conception of ultimate meaning. 

Richard Fernandez is the author of the Belmont Club. He has been a software developer and co-authored Open Curtains which proposes privacy as an information property right.


4 comments on “The Return of Good and Evil, Part 2”

  1. I have yet to hear a compelling case for why AGI will not be used by the Globalist Oligarchy to subjugate, oppress, and largely eliminate undesirable humans from their midst. Mass surveillance, CBDC, health passes, etc. will be methods used to track and enslave people. The non-compliant will be eliminated. Even lacking these technologies, this type of enslavement has already happened on a great scale in Soviet Russia and Nazi Germany, has happened in China at various levels of atrocity over the past 60 years, and has been the case continuously without respite in North Korea for decades. This kind of power is never willingly relinquished. The will is obviously there. The only real limitation is the infrastructure in place to implement it. If you don't believe it, read and hear the lunatic ravings of WEF/Davos consultant Yuval Noah Harari. Besides their own restraint, what is going to stop it?

  2. Rewatching the 2004 Battlestar Galactica series this week. I am seeing it touches on all the themes of this article.

  3. Lots of science fiction posits unfriendly aliens, from Star Trek to Larry Niven to Greg Bear to David Brin, and of course the classic Twilight Zone, "It's a cookbook!"
    The AI-hysterics and Singularity fans are just silly. The universe is infinitely complex and any AI is necessarily finite; it might even outscale human intelligence, but not by any great amount. I fear the event horizon for omniscience is just that: unreachable. Any and all intelligent agents will always be lost in the infinite. That's what it means to be intelligent: something that can cope with the big gray in-between, something that can cope with the waves, which never stop. You learn to ride them; you never stand on the beach and tell them to halt.

  4. AGI or AI is not dangerous because of what it is; it is dangerous because of what people mistakenly believe it to be. With AGI, there is no lion; just the roar. Think of it this way: what was more dangerous to human health and welfare? Was it the falsehoods—the false representations of reality—told by Fauci, or was it instead that people believed (falsely, as it turns out) that Fauci had a greater and superior intelligence that was infallible? In other words, had people not granted Fauci the aura of one having a great and superior intellect (as they are now asked to do with AI), would they have been as easily deceived? Would they have so easily set aside their own good judgment in favor of the latest pronounced demigod, real or artificial?
    “Be sober, be vigilant; because your adversary the devil, as a roaring lion, walketh about, seeking whom he may devour:”
    — I Peter 5:8
