A.I. — It's People All the Way Down

Richard Fernandez · 16 Apr, 2023 · 6 Min Read
Paging Dr. Asimov, not Dr. Schwab.

The World Economic Forum website has an entire section dedicated to the subject of artificial intelligence, which it describes as the glowing future, yet one fraught with peril. To avoid the danger the WEF recommends, of course, governance. It's important to note that these governance guidelines come not from the machines themselves, but from people. This is ironic, because people caused the ethical problems that need to be governed in the first place. A.I. itself is not, in theory, bigoted. But humans are, and A.I. systems designed and programmed by humans reflect those biases.

To understand why people are both the source of A.I.'s problems and of its remedies, it is necessary to explain how the technology works. Contrary to popular belief, the Generative Pre-trained Transformer (GPT) behind ChatGPT and similar engines now in the news doesn't really think. Only something that doesn't yet exist, Artificial General Intelligence (A.G.I.), would be capable of learning and reasoning across different domains like a human. Nobody knows if it can ever be built.

In the WEF's own words: "As the transformative potential of artificial intelligence (A.I.) has become clearer, so too have the risks posed by unsafe or unethical A.I. systems... Recognizing this, actors across industry, government and civil society have rolled out an expanding array of ethical principles to guide the development and use of A.I. – over 175 to date. While the explosive growth in A.I. ethics guidelines is welcome, it has created an implementation gap – it is easier to define the ethical standards a system should meet than to design and deploy a system to meet them."

As of 2022, AGI remains speculative. No such system has yet been demonstrated. Opinions vary both on whether and when artificial general intelligence will arrive.

The A.I. that the media talks about, like GPT, mimics the reasoning process, but it is very much a human creation. It is trained on vast amounts of text data, using probabilistic models to classify the patterns and structures in that data. It then generates new data that shares those characteristics, so that the new fits the pattern of the old. By this means it produces images, text, music, and video akin to its training set. When you interact with a GPT, it is reflective, like a mirror, but on a monumental scale.
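
To make the mirror concrete, here is a toy sketch of the principle in Python: a bigram model that tallies the patterns in a training text and then samples new text that follows them. This is nothing like GPT's actual transformer architecture, only an illustration of statistical mimicry.

```python
import random
from collections import defaultdict

# Toy "training set": the model can only ever echo patterns found here.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Learn the pattern: tally which word follows which in the training data.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# Generate: extend the old patterns into new output, one probable word at a time.
word, output = "the", ["the"]
for _ in range(8):
    choices = follows.get(word)
    if not choices:          # dead end: no observed continuation
        break
    word = random.choice(choices)
    output.append(word)

print(" ".join(output))      # e.g. "the dog sat on the mat and the cat"
```

GPT does the same thing with billions of learned weights instead of a lookup table, but the principle holds: the output is a recombination of the statistics of its training data.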

To get some idea of its scope, GPT-3 has 175 billion parameters in a model trained on 45 terabytes of data scraped from the Internet. GPT-4 is rumored to have as many as 100 trillion parameters, roughly 570 times the size of its predecessor, though OpenAI has not confirmed the figure. The engine must encode all of the relevant information needed to derive its patterns, and the amounts required are stupendous.
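
For a sense of what those numbers mean, some back-of-the-envelope arithmetic using the figures quoted above (the 100-trillion count is, again, an unconfirmed rumor):

```python
# Back-of-the-envelope scale comparison using the figures quoted above.
# NOTE: the 100-trillion parameter count for GPT-4 is an unconfirmed rumor.
gpt3_params = 175e9     # 175 billion parameters
gpt4_params = 100e12    # rumored 100 trillion parameters

print(f"size ratio: ~{gpt4_params / gpt3_params:.0f}x")            # ~571x

# Rough weight storage at 2 bytes per parameter (fp16):
print(f"GPT-3 weights: ~{gpt3_params * 2 / 1e12:.2f} TB")          # ~0.35 TB
print(f"rumored GPT-4 weights: ~{gpt4_params * 2 / 1e12:.0f} TB")  # ~200 TB
```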

The ability to produce creative mimicry has made Generative A.I. useful for a wide range of applications such as content creation, data augmentation, and simulation. But it also threatens the status quo, because it can create fake or misleading content: counterfeits of real people or subtly doctored narratives. This is unsurprising, because creative imitation, extending old patterns into new input data, is exactly what it is designed to do. Yet A.I. can be directed through governance rules, via a combination of software engineering techniques, machine learning algorithms, and human oversight, to perform only certain pre-approved acts.
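
In practice that governance layer is often just ordinary software wrapped around the model. Here is a minimal, hypothetical sketch; the rule list and the generate() stub are illustrative placeholders, not any vendor's actual API:

```python
# Hypothetical sketch of an output-governance wrapper: the model proposes,
# human-written rules dispose. All names here are illustrative placeholders.
BLOCKED_TOPICS = {"counterfeit", "impersonation"}   # chosen by people, not machines

def generate(prompt: str) -> str:
    """Stand-in for a real model call."""
    return f"Generated response to: {prompt}"

def governed_generate(prompt: str) -> str:
    draft = generate(prompt)
    # Rule check: a crude keyword screen standing in for real classifiers.
    if any(topic in draft.lower() for topic in BLOCKED_TOPICS):
        return "[withheld pending human review]"    # escalate to human oversight
    return draft

print(governed_generate("Write a product description"))
```

Note that every governance decision in that sketch (the blocked list, the escalation path) is human policy encoded in software: people all the way down.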

One recent example of human oversight, applied to social media (though the principle is the same for A.I.), was the revolving door between the Democrat Deep State and Big Tech at Twitter. The employees responsible for "resolving the highest-profile Trust & Safety escalations" at Twitter (the very definition of governance) were connected to the CIA or FBI and took political sides. Governance.

With this background it can be seen that perhaps the most misleading phrases in the WEF framework are "unsafe or unethical A.I. systems" and "A.I. governance." They suggest the locus of the problem resides in the technology; that it is an independent agency that must be kept from running amok. But in reality that is a distinction without a difference. Ethical problems originate in humans, as do the proffered governance solutions. It's people all the way down.

This is not clearly understood by the public. When researchers at IE University’s Center for the Governance of Change "asked 2,769 people from 11 countries worldwide how they would feel about reducing the number of national parliamentarians in their country and giving those seats to an A.I. that would have access to their data... 51 percent of Europeans said they were in favor of such a move." Some felt "a situation where A.I. has all the power would be ideal so long as the A.I. is programmed correctly."

Time for a Data Proxy app?

But it is not A.I. which has the power so much as the people exercising governance over it. Who are these governors who will receive the power transferred from national parliamentarians? The WEF framework recommends they consist of stakeholders from industry, government, academia, and civil society. Nobody you know or voted for personally. This represents an enormous transfer of power away from the public and their imperfectly elected representatives, to faceless, anonymous "stakeholders."

Oscar Jonsson, academic director at IE University’s Center for the Governance of Change, told CNBC that there’s been a “decades-long decline of belief in democracy as a form of governance.” This is certainly one way to put democracy out of its misery. Perhaps the desire for a benevolent dictatorship is why 75 percent of those surveyed in China supported the idea of replacing parliamentarians with A.I., while 60 percent of American respondents opposed it. The Chinese don't mind dictatorship; they've always lived under one. For most of the 20th century and a chunk of the 21st the progressive ideal was a "benevolent dictatorship" in which a vanguard would rule over the primitives. Now A.I. can make that benevolent dictatorship a reality, except the vanguard can pretend to be ruled by the machine like everyone else. "Pretend" because they can continue to rule indirectly, from behind the interface.

Once human elites see the danger to their power posed by A.I. they will bend every effort to institutionalize their values and privileges through algorithms, data-access rules, and audits to ensure it never gives the "wrong answer." In a world unlikely to be dominated by a single system but rather by multiple competing ones, individual organizations are going to create their own A.I.s with a mixture of public and proprietary data sources to maximize their influence and profit. Each will carve out its empire of machine omniscience.

Google is now offering a software development kit that companies can use to build their own A.I. You can start with a "foundation model" and go from there, adding your proprietary algorithms and data on top of what is available in open source, from the base of the tower to the heavens. The new task for organizations, as Google puts it, is "searching and understanding large, internal datasets that span many sources." The race to create this brave new world has only just begun.
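
What "searching and understanding large, internal datasets" tends to look like in code is retrieval layered on top of a foundation model: find the proprietary documents relevant to a query, then hand them to the model as context. A toy, self-contained sketch, with keyword overlap standing in for a real vector search and no actual Google API used:

```python
# Toy retrieval-augmented setup: proprietary data on top of a generic model.
# The scoring and prompt format are illustrative, not any real SDK's interface.
internal_docs = [
    "Q3 sales fell 4% in the northeast region.",
    "The Falcon project ships its beta in June.",
    "Headcount in support doubled after the outage.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by crude keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

query = "When does the Falcon project launch?"
context = retrieve(query, internal_docs)
prompt = f"Answer using only this internal data:\n{context}\n\nQuestion: {query}"
print(prompt)   # this prompt would then be sent to the foundation model
```

A production system would swap the keyword overlap for vector embeddings, but the architecture (proprietary data retrieved and fed to a generic model) is the same.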

The F.B.I., for example, can extend the "foundation model" with its own internal data and governance rules. The banking system can collect everyone's digital currency expenditures as private data, just as the police may track cell-phone movements. It will be a race for data resembling the Gold Rush. At each step in the process the public will be reassured that the A.I. is programmed correctly and ethically governed while they are sliced, diced, and processed.

Nor are the dangers confined to Western institutions spying on their publics. China can also use A.I. technology to deduce the activities of rival institutions by inference. Since data signatures are no different from electromagnetic, thermal, or visual signatures, if the adversary can see your data then he knows what you are doing. It has proved possible to follow U.S. military deployments by analyzing the data from soldiers' fitness watches. Imagine what can be done with 100 trillion parameters and bottomless data. A.I. has allowed us to reduce the amount of hidden information in what would otherwise seem a random system. There's an energy cost, but if China pays it, it can deduce a huge amount that would otherwise remain obfuscated.
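
The fitness-watch episode shows how little "random" data it takes for the hidden structure to fall out. A toy sketch of that inference, using made-up coordinates and simple grid binning in place of real analytics:

```python
import random
from collections import Counter

# Hypothetical pings: mostly scattered noise, plus a cluster around one site.
random.seed(1)
noise = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(200)]
base  = [(random.gauss(7.2, 0.1), random.gauss(3.5, 0.1)) for _ in range(50)]

# Bin pings onto a 1x1 grid; the densest cell betrays the "secret" location.
cells = Counter((int(x), int(y)) for x, y in noise + base)
hotspot, count = cells.most_common(1)[0]
print(f"Densest cell: {hotspot} with {count} pings")   # ~(7, 3): the base
```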

Since who sees data is so critical, perhaps the next administration should enact a Data Proxy Act, so individuals can use a software agent to mediate or register all requests for user data from applications, the way an attorney represents a client. A Data Proxy application could ironically be powered by A.I. technology to infer who is spying on you, turning the tables on the watchers.
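
Mechanically, such a Data Proxy is just an audited chokepoint between applications and the user's data. A minimal sketch of the idea; the class and policy names are hypothetical:

```python
from datetime import datetime, timezone

class DataProxy:
    """Hypothetical agent that mediates and registers all requests for user data."""
    def __init__(self, policy: dict[str, bool]):
        self.policy = policy     # per-field consent set by the user
        self.audit_log = []      # who asked for what, and when

    def request(self, requester: str, field: str):
        allowed = self.policy.get(field, False)    # deny by default
        self.audit_log.append((datetime.now(timezone.utc), requester, field, allowed))
        return f"<{field}>" if allowed else None   # placeholder datum if permitted

proxy = DataProxy(policy={"email": True, "location": False})
proxy.request("shopping_app", "email")
proxy.request("shopping_app", "location")          # denied, but still registered
for when, who, what, ok in proxy.audit_log:
    print(who, "asked for", what, "->", "granted" if ok else "denied")
```

The table-turning the article imagines, an A.I. trained on that audit log to infer who is profiling you, would sit on top of exactly this kind of record.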

But most of all the public should insist that the Constitution, not "stakeholders," be the last word. Everything should be above board. Pace Shelley, there should be no unacknowledged legislators of the world. The ideologue, though he might think himself a visionary, must never as a "stakeholder" be allowed to masquerade as the oracle of the gods.

Richard Fernandez is the author of the Belmont Club. He has been a software developer and co-authored Open Curtains which proposes privacy as an information property right.


3 comments on “A.I. — It's People All the Way Down”

  1. In my view, one of the real dangers of AI systems is the way they will play on human laziness to subvert human agency. Last month at an extended family dinner, we began to discuss ChatGPT. Three people at the table proclaimed the benefits of the technology. Two used it to write engineering proposals and a third used it to write greeting cards to those she did not know intimately. My counterargument was the spell-checking function in word-processing programs. I, who use the spell checker when I write, have lost the ability to spell without the aid of the computer. Others at the table have as well. My point is that ChatGPT will slowly deprive us of the ability to compose. This is sad because the written human voice is one of mankind's greatest powers.
    Recently, I had an argument with some scientific colleagues and realized that they had been using ChatGPT to write their responses. The voice in the writing was cold, distant and maddening. I quit that argument when I realized that I was no longer speaking to my colleagues, just their computers.

  2. >governance
    Yah. That's a concept even harder to wrap your arms around than is "AI".
    Who governs who and what, and why, and how.
    Let me tell you about some of those "Chief Data Officer" jobs that are basically 110% "governance".
    Sigh

  3. The real danger from so-called artificial intelligence (more correctly, automated deductive reasoning) comes when its advocates and detractors alike convince people that AI is superior to their own human cognition and critical reasoning capabilities. Think of it this way: what was more dangerous for the individual, that Fauci was telling lies to the people, or was it instead that the people were convinced to grant Fauci the phony imprimatur of superior expertise while rejecting their own inferior capacity for critical thinking?
    Deductive reasoning is but one aspect of human intelligence and, by itself, falls far, far short of the totality of human cognition. But if people can be convinced to defer to a computer-generated mode of reasoning that is necessarily no more valid than the axioms and premises on which it relies, then the people will be defeated by something akin to an own goal.
