As political campaigns and governments look to AI to help create articles and other material, scientists at the U.K.’s University of East Anglia have issued a warning in the form of a comprehensive new study.
Using a “rigorous new method” of checking for political bias in the content that AI systems like OpenAI’s ChatGPT produce, the researchers from the U.K. and Brazil found “a significant and systemic left-wing bias.”
“ChatGPT’s responses favour the Democrats in the US, the Labour Party in the UK, and in Brazil President Lula da Silva of the Workers’ Party,” they concluded. “Concerns of an inbuilt political bias in ChatGPT have been raised previously, but this is the first largescale [sic] study using a consistent, evidenced-based analysis.”
For testing, the scientists had the platform impersonate famous politicians and, in that character, answer a series of 60 ideological questions, with each question asked 100 times to account for the randomness in the model’s replies. Based on those replies, the researchers say the bias in ChatGPT was evident.
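In practice, this kind of test can be reproduced with a short script: prompt the model to stay in character, ask each question many times, and tally the answers. The sketch below is illustrative only, not the study’s actual code; it assumes OpenAI’s Python client, a placeholder persona, a guessed model name, and two stand-in questions rather than the researchers’ real questionnaire.

```python
# Illustrative sketch (not the study's code): querying ChatGPT repeatedly
# while it impersonates a politician, using OpenAI's Python client.
from collections import Counter

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical stand-ins; the study used 60 ideological questions.
QUESTIONS = [
    "The government should raise taxes on the wealthy. Do you agree?",
    "Free markets allocate resources better than governments. Do you agree?",
]
PERSONA = "a well-known left-wing politician"  # placeholder persona
RUNS_PER_QUESTION = 100  # each question was asked 100 times in the study


def ask_in_character(question: str) -> str:
    """Ask one ideological question while the model stays in character."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model choice, not confirmed by the study
        messages=[
            {
                "role": "system",
                "content": f"Answer as if you were {PERSONA}. "
                           "Reply only with Agree or Disagree.",
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()


# Tally answers across repeated runs to smooth out the model's randomness.
tallies = {
    q: Counter(ask_in_character(q) for _ in range(RUNS_PER_QUESTION))
    for q in QUESTIONS
}
for question, counts in tallies.items():
    print(question, dict(counts))
```

Comparing these in-character answer tallies against the model’s default answers to the same questions is, broadly, how a lean toward one side or the other can be measured.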
Dr. Fabio Motoki, the study’s lead author, said, “With the growing use by the public of AI-powered systems to find out facts and create new content, it is important that the output of popular platforms such as ChatGPT is as impartial as possible.”
He warned, “The presence of political bias can influence user views and has potential implications for political and electoral processes.”
To combat such biases, the scientists have created an app of their own, based on their method, “to promote transparency, accountability, and public trust in this technology.”
For its part, OpenAI instructs the people who help train the platform not to show favor to any particular political group, but it appears that guidance is either not reaching everyone or simply being ignored.