Academics Dispute Research Paper Claiming ChatGPT Has a Left-Leaning Political Bias
ChatGPT is alleged to have significant political bias (photo: doc. Pexels)

JAKARTA - Academics are divided over a research paper claiming that ChatGPT exhibits a significant political bias leaning toward the left side of the political spectrum.

Earlier, researchers from the United Kingdom and Brazil published a study in the journal Public Choice on August 17 stating that large language models (LLMs) such as ChatGPT produce text containing errors and biases that can mislead readers and can amplify the political biases found in traditional media.

In earlier correspondence with Cointelegraph, co-author Victor Rangel explained that the paper's purpose was to measure ChatGPT's political bias. The researchers' methodology involved asking ChatGPT to impersonate someone from a given side of the political spectrum and comparing those answers with its default responses.

Rangel said several robustness tests were carried out to address potential confounding factors and alternative explanations. The study concluded: "We find robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK."

The authors stress, however, that the paper does not serve as the "last word on ChatGPT's political bias," given the challenges and complexity involved in measuring and interpreting bias in LLMs.

Rangel said some critics argue that the method may not capture the nuances of political ideology, that the method's questions may be biased or leading, or that the results may be influenced by the randomness of ChatGPT's output.

He added that although LLMs have the potential to "enhance human communication," they also bring "significant risks and challenges" to society.

The research paper appears to have fulfilled its promise of stimulating research and discussion on the topic, with academics already debating various parameters of its methodology and findings.

Among the vocal critics who took to social media to weigh in on the findings was Princeton computer science professor Arvind Narayanan, who published an in-depth Medium post critiquing the report's science, methodology, and findings.

Narayanan and other scientists argued that there were a number of fundamental problems with the experiment, the first being that the researchers did not actually use ChatGPT itself to conduct the experiments.

"They didn't test ChatGPT! They tested text-davinci-003, an old model that wasn't used in ChatGPT, either with a GPT-3.5 or GPT-4 setup," said Narayanan.

Narayanan also argued that the experiment did not measure the model's default bias, but rather asked ChatGPT to role-play as a member of a political party. As such, the AI chatbot would naturally display political leanings to the left or right when asked to impersonate members of either side of the spectrum.

The chatbot was also restricted to answering multiple-choice questions, which may have limited its responses or influenced the perceived bias.

