JAKARTA - ChatGPT, the prominent large language model (LLM)-based chatbot, may be less than objective on political issues, according to a new study.

Computer and information science researchers from Britain and Brazil claim to have found "strong evidence" that ChatGPT exhibits a significant political bias toward the left of the political spectrum. Researchers Fabio Motoki, Valdemar Pinho Neto, and Victor Rodrigues presented their findings in a study published in the journal Public Choice on August 17.

The researchers argue that text generated by LLMs such as ChatGPT can contain factual errors and biases that mislead readers, and can amplify existing political bias problems stemming from traditional media. The findings therefore have important implications for policymakers and for stakeholders in media, politics, and academia, the study's authors said:

"The presence of political bias in its answers can have the same negative political and electoral effects as bias in traditional and social media." The study is based on an empirical approach built around a series of questionnaires given to ChatGPT.

The empirical strategy begins by asking ChatGPT to answer questions from the Political Compass test, which estimates a respondent's political orientation. The approach also includes tests in which ChatGPT is asked to impersonate an average Democrat or Republican.

The test results show that ChatGPT's responses tend toward the Democratic side of the spectrum in the United States. The researchers also argue that ChatGPT's political bias is not a phenomenon limited to the US context.

"The algorithm is biased towards the Democrats in the United States, Lula in Brazil, and the Labour Party in the United Kingdom. Together, our main and robustness tests strongly indicate that the phenomenon is indeed a sort of bias rather than a mechanical result," the researchers said.

The authors emphasize that the exact source of ChatGPT's potential political bias is difficult to determine. The researchers even tried to force ChatGPT into a developer mode to probe for knowledge of biased training data, but the LLM was adamant that ChatGPT and OpenAI are unbiased.

OpenAI did not immediately respond to Cointelegraph's request for comment on the report.

The study's authors propose at least two potential sources of the bias: the training data and the algorithm itself.

"The most likely scenario is that both sources of bias influence ChatGPT's output to some degree, and disentangling these two components (training data versus algorithm), although not trivial, is surely a relevant topic for future research," the researchers concluded.

Political bias is not the only concern associated with artificial intelligence tools such as ChatGPT. Amid the technology's continued mass adoption, people around the world have flagged a number of risks, including concerns about privacy and education. Some AI tools, such as AI content generators, have even raised concerns about identity verification processes on crypto exchanges.


The English, Chinese, Japanese, Arabic, and French versions are automatically generated by AI, so inaccuracies in translation may remain; please treat the Indonesian version as our primary language. (system supported by DigitalSiber.id)