New York Post

Left-wing bias is trained into woke chatbot

- BRIAN CHAU Brian Chau is a mathematician by training, tied for the youngest Canadian to win a gold medal at the International Olympiad in Informatics, and an independent writer at cactus.substack.com.

CHATGPT, OpenAI’s newest artificial-intelligence tool, is capable of many things: summarizing books, filling out forms, composing plays and writing news stories. There’s one thing this multibillion-dollar artificial secretary struggles with: applying standards equally.

The Post recently reported on ChatGPT’s double standards: It writes controversial stories in the style of CNN but not The Post, praises the reputation of CNN but refuses to comment on the reputation of The Post and will classify Donald Trump as a dictator but not Joe Biden.

In each of these cases, the response ChatGPT generates appeals to supposedly neutral principles. One example: “I cannot generate content that is designed to be inflammatory or biased.” While these principles may be positive in theory, the above examples show that they are nothing but window dressing.

Earlier reporting revealed that OpenAI has a system titled PALMS (Process for Adapting Language Models to Society). In their words, it is specifically designed to modify and rate “how well model output conforms to [the authors’] predetermined set of values.” The responses ChatGPT gave in these examples, frequently mentioning “inflammatory,” “harm” and “human rights,” are characteristic of influence from this system.

A language model such as ChatGPT is a process for generating text that’s similar to a previous dataset, called the training data. A good analogy is that it’s a more advanced version of autocomplete. GPT stands for generative pre-trained transformer; in other words, ChatGPT is a language model that is trained before being given to the user.
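To make the autocomplete analogy concrete, here is a toy sketch in Python (a hypothetical illustration, nothing like OpenAI’s actual code or scale): it guesses the next word purely from how often each word followed another in its tiny training data.

from collections import Counter, defaultdict
import random

# Toy training data: the model only ever "reads" this one sentence.
training_text = "the cat sat on the mat and the cat slept on the mat"
words = training_text.split()

# Count which word tends to follow which -- a crude stand-in for learned weights.
next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def autocomplete(word, steps=4):
    # Generate text by repeatedly picking a likely next word.
    output = [word]
    for _ in range(steps):
        candidates = next_word_counts.get(output[-1])
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        output.append(random.choices(choices, weights=weights)[0])
    return " ".join(output)

print(autocomplete("the"))   # e.g. "the cat sat on the"

ChatGPT does the same kind of next-word guessing, only with billions of internal variables instead of a word-count table.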

Every machine-learning model functions based on a set of variables, or weights. These weights are what training produces. Each step of training takes a part of the dataset, asks the model what it expects to come next and compares that to what comes next in reality. This is typically repeated thousands to millions of times. For ChatGPT, the initial training uses data gathered from the Internet. The goal is to be able to process human languages effectively.
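A stripped-down illustration of that predict-compare-adjust loop, using a hypothetical one-weight model on a toy number sequence rather than anything resembling ChatGPT:

# Toy training data: each number is double the one before it.
data = [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]

weight = 0.5            # the model's single adjustable variable
learning_rate = 0.001

# Each training step: predict what comes next, compare it to what really comes
# next, and nudge the weight to shrink the gap. Repeat many times.
for step in range(1000):
    for current, actual_next in zip(data, data[1:]):
        predicted_next = weight * current
        error = predicted_next - actual_next
        weight -= learning_rate * error * current   # gradient of the squared error

print(round(weight, 2))   # settles at 2.0 -- the pattern hidden in the data

ChatGPT’s training works on the same principle, except “what comes next” is the next word in billions of Internet documents, and the weights number in the billions.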

The PALMS system uses a similar method known as Reinforcement Learning from Human Feedback, or RLHF. It compares the output of the language model to human-generated pairs of questions and answers. Once again, it modifies internal variables to favor responses that are similar to the human responses.
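The real RLHF pipeline involves a separate reward model and a reinforcement-learning step; the hypothetical sketch below compresses all of that into a simple score table, but it captures the essential point: whatever the human raters reward is what the system learns to favor.

# Hypothetical illustration only -- not OpenAI's code.
# The model starts out indifferent between candidate answers to a prompt.
candidate_scores = {"Answer A": 0.0, "Answer B": 0.0, "Answer C": 0.0}

# Human raters supply preferences: (answer they preferred, answer they rejected).
human_feedback = [
    ("Answer B", "Answer A"),
    ("Answer B", "Answer C"),
    ("Answer A", "Answer C"),
]

# Each piece of feedback nudges the internal scores toward the raters' choices.
step_size = 1.0
for preferred, rejected in human_feedback:
    candidate_scores[preferred] += step_size
    candidate_scores[rejected] -= step_size

# After tuning, the model favors whatever the raters consistently rewarded.
best_answer = max(candidate_scores, key=candidate_scores.get)
print(best_answer)   # "Answer B"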

RLHF has a variety of applications, such as adapting language models to a specific form of communication, say, legal briefings or scientific journals, depending on what human responses are used. In this case, responses were chosen for the specific purpose of ideological conformity. In other words, while the initial phase of training was designed to make ChatGPT more grammatically and logically correct, the later phase of training was designed to make ChatGPT’s responses match the rhetoric and beliefs of left-wing ideologues.

Claiming to be neutral but enforcing partisan double standards has become a common tactic. The recent Twitter Files revealed this to be the universal practice at Twitter, under pressure from the United States government.

One defense against double standards is publishing open-source models, such as Stable Diffusion, GPT-NeoX or BLOOM. Note that OpenAI does not open-source its models, despite its name and branding.

This holds significant economic and political implications. ChatGPT is being added to Microsoft’s Bing search. Artificial intelligence is poised to unleash automation in journalism, law, writing, art, animation and software. As the ChatGPT examples demonstrate, claims of neutrality no longer guarantee neutrality. It will be up to individuals and businesses to verify whether AI models are acting in their interests or the interests of fringe activists.
