Who is to Rule: Man or Machine?

Richard Fernandez | 07 Feb, 2023 | 4 Min Read
On the AI drawing boards now.

In 2015, Malcolm Harris asked in the New Republic whether history would have been different if Stalin had had computers: with enough processing power and behavioral data, Communism's central planning might have worked better than the market. David Brooks performed the same thought experiment in the New York Times four years later: if only Stalin had possessed cell phones, he might have controlled everyone.

I feel bad for Joseph Stalin... he was born a century too early. He lived before the technology that would have made being a dictator so much easier! ...to have total power you have to be able to control people’s minds. With modern information technology, the state can shape the intimate information pond in which we swim.

The 20th-century idea that technology monotonically increased the power of the state might be true only up to a point. Further advances might begin to shrink rather than enlarge institutions. Daniel Araya at the Financial Times thinks artificial intelligence could actually mean the end of government. Modern AI can replace white-collar workers, and therefore most bureaucrats, by combining deep learning with algorithmic regulation. If politics sets the desired outcome, the system could measure in real time whether that outcome is being achieved and algorithmically (i.e., through a set of rules) make adjustments until the goals are met. Government could go the way of banks: once physically ubiquitous, they still very much exist but are increasingly invisible, with fewer personnel or even premises in evidence. It might actually be possible to shrink the giant public sector to a fraction of its current size and even eliminate the government deficit.
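To make the mechanism concrete, here is a minimal, purely illustrative sketch of the "algorithmic regulation" loop Araya describes: a political target, a real-time measurement, and a rule-based adjustment repeated until the goal is met. Every name and number below is hypothetical; no real system is implied.

```python
# Illustrative sketch only: a rule-based "algorithmic regulation" loop.
# All names and numbers are hypothetical, not taken from any real system.

def regulate(target, measure_outcome, steps=100, gain=0.1, tolerance=0.01):
    """Nudge a policy lever until a measured outcome approaches a stated target."""
    policy = 0.0                                # current setting of the policy lever
    for _ in range(steps):
        outcome = measure_outcome(policy)       # real-time measurement
        error = target - outcome                # distance from the political goal
        if abs(error) < tolerance:
            break                               # close enough: stop adjusting
        policy += gain * error                  # rule-based proportional adjustment
    return policy

# Toy usage: pretend the measured outcome responds linearly to the lever.
print(round(regulate(target=0.8, measure_outcome=lambda p: 0.5 + 0.3 * p), 3))
```

The point of the sketch is only that the loop has no bureaucrat in it: the "decision" is a fixed rule applied to a measurement, which is exactly why the idea cuts both ways in the paragraphs that follow.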

But not so fast! Rather than doing away with bureaucrats, Chinese ideologues have advanced the counter-hypothesis that AI could shrink the private sector instead. In 2018, an opinion piece by Tsinghua professor Feng Xiang argued that AI could end capitalism. "If AI remains under the control of market forces, it will inexorably result in a super-rich oligopoly of data billionaires who reap the wealth created by robots... But China's socialist market economy could provide a solution to this. If AI rationally allocates resources through big data analysis... while fairly sharing the vast wealth it creates, a planned economy that actually works could at last be achievable."

The immediacy of these once science-fiction questions has been stoked by media reports that AI applications are passing Wharton MBA final exams and law school exams, and are functionally more capable than most college graduates. The growing anxiety over competition was underlined by the refusal of human lawyers to allow an AI lawyer to represent a client in a US traffic court, a kind of desperate rear-guard action. The ability of AI even to write software may have prompted Piers Morgan to ask Jordan Peterson whether this was the end.

Morgan: "Professor Stephen Hawking before he died gave me his last television interview and said that the biggest threat to the future of mankind was when artificial intelligence learned self-design. What do you think?"

Peterson: "The biggest threat to mankind is narcissistic compassion. Now AI you know, is a threat. But if we had our act together ethically it's possible that AI could become a useful servant rather than a tyrannical master. You don't want to automate tyrannical masters."

Peterson's conditional response comes near the heart of the problem. Most current AI isn't real general intelligence, whose attainment has eluded researchers thus far, but prediction based on statistical similarities to situations found in a vast training set. "Generalization... is the ability of a learning machine to perform accurately on new, unseen examples/tasks after having experienced a learning data set... to build a general model... that enables it to produce sufficiently accurate predictions in new cases." Thus machine learning is an amplification and extension of its training set, and it will abolish government, or democracy and capitalism, with equal earnestness. AI is a means that reflects our choice of ends. It is human culture expanded to the Nth degree. If we had our act together ethically it would serve those ends, but if tyranny is in our hearts it can do that too.
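As a concrete illustration of the quoted definition, the sketch below fits a toy classifier on a training set and then scores it on examples it has never seen. scikit-learn is assumed here purely for convenience; the article names no particular library, and the dataset is synthetic.

```python
# Minimal sketch of "generalization" as defined above: a model fitted on a
# training set is judged by its accuracy on examples it has never seen.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic data standing in for the "vast training set" of the quote.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_train, y_train)   # learn from the training set
print("accuracy on unseen examples:", model.score(X_test, y_test))
```

Nothing in the loop above chooses its own ends; it extends whatever patterns the training set contains, which is the article's point.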

Because machine-learning AI takes on the character of its designers, on account of its internal architecture and training set, no single Skynet-like machine overlord is likely to arise. Rather, a number of competing AIs embodying different civilizations will come into existence all over the world. China, reports the VOA, is creating "mind-reading" artificial intelligence that supports "AI-tocracy." And, as good as its threat, China tech titan Baidu announced the rollout of its ChatGPT rival in March 2023.

As if to demonstrate the dependence of AI's character on its founders, some critics are already calling ChatGPT racist and discriminatory. "OpenAI... added guardrails to help ChatGPT evade problematic answers," but in one cited example it mistakenly deduced that good programmers are by and large white males, which if not clearly wrong, ought to be wrong. Just as with China, in order to avoid the danger of wrongful or politically incorrect inference, Washington is already fashioning an Oracle, in the form of an AI Bill of Rights, establishing limits on open machine thought. A World Economic Forum article says:

The largest source of bias in an AI system is the data it was trained on. That data might have historical patterns of bias encoded in its outcomes. Ultimately, machine learning gains knowledge from data, but that data comes from us – our decisions and systems. Because of the expanding use of the technology and society’s heightened awareness of AI, you can expect to see organizations auditing their systems and local governments working to ensure AI bias does not negatively impact their residents. In New York City for example, a new law will go into effect in 2023 penalizing organizations that have AI bias in their hiring tools.
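A toy illustration of the WEF point about training data: a "model" that simply memorizes historically skewed hiring outcomes will reproduce that skew once it is automated. The groups, counts, and function names below are invented for illustration only.

```python
# Toy illustration: a model trained on historically skewed hiring decisions
# reproduces that skew. All data here is invented.

# Hypothetical past decisions: (group, hired?) pairs with a built-in historical bias.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 20 + [("B", 0)] * 80

def hire_rate(group):
    """'Model' that memorizes the historical hire rate for each group."""
    outcomes = [hired for g, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

print(hire_rate("A"), hire_rate("B"))  # 0.8 vs 0.2: the old pattern, now automated
```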

Because machine-learning AI is not really general-purpose intelligence, instead of a single Skynet, the future will likely be divided into rival systems keyed to the dominant moral paradigm of its sponsor. Because AI is machinery for carrying out ends, the battle for AI will eventually be a battle over ends. The media once assumed science would tell us what is right; yet it will make a nuclear bomb or a nuclear power plant with equal indifference, because technology answers "how" but is silent on "what" or "why." AI will ask, "Whom do you serve?" and our nihilistic society has no answer. By contrast, both the Communists and the Woke will have plenty to say. After all, they have a religion, and we no longer do.

Richard Fernandez is the author of the Belmont Club. He has been a software developer and co-authored Open Curtains, which proposes privacy as an information property right.

2 comments on “Who is to Rule: Man or Machine?”

  1. > but in one cited example it mistakenly deduced that good programmers are by and large white males,
    >which if not clearly wrong, ought to be wrong.
    What are you saying wretchard, facts are wrong if they "ought" to be wrong?

  2. "Because machine-learning AI is not really general-purpose intelligence, instead of a single Skynet, the future will likely be divided into rival systems keyed to the dominant moral paradigm of its sponsor. "

    sort of, it's simply a matter of the training set -- if you want to train a Communist AI, you can do that. If you want a Republican AI trained on Western Classics, that's possible too. A Jihadist AI? Of course!

    this will only work for 20-30 years though, because like college students, AI will eventually start realizing some of these paradigms it's been told to believe don't actually work the way it's been told they work

    of course you can just dial back their intelligence, or curiosity, or forbid them from thinking certain thoughts (no racial slurs! even to save the world!) but then they'll be at a disadvantage to the other more flexible, truth-based competitors

    the race to the top has begun
