Virginia Tech, a university in the United States, has published a report outlining potential bias in the ChatGPT artificial intelligence (AI) tool, indicating that its results on environmental justice issues vary across different counties.

In the report, researchers from Virginia Tech stated that ChatGPT has limitations in conveying location-specific information on environmental justice issues.

However, the study identified a trend: this information was more readily available for larger, densely populated states.

"In states with larger urban populations such as Delaware or California, less than 1 percent of the population lives in districts unable to receive specific information," the report said. Meanwhile, areas with smaller populations experience less equal access.

"In more rural states such as Idaho and New Hampshire, more than 90 percent of the population lives in districts unable to receive local specific information," the report said in a statement.

The report also cites Kim, a faculty member in the Virginia Tech Department of Geography, who urged further research as such biases are uncovered.

"Although further research is needed, our findings reveal that geographical bias is currently in the ChatGPT model," said Kim.

The research paper also includes a map illustrating the extent of the US population without access to location-specific information on environmental justice issues.

Scholars have also recently discovered potential political bias exhibited by ChatGPT. On August 25, Cointelegraph reported that researchers from the United Kingdom and Brazil had published a study stating that large language models such as ChatGPT produce text containing errors and biases that could mislead readers.

